In this paper, we describe a novel bidding strategy that autonomous trading agents can use to participate in Continuous Double Auctions (CDAs). Our strategy is based on both short- and long-term learning, which allows such agents to adapt their bidding behaviour to be efficient in a wide variety of environments. For the short-term learning, the agent updates the aggressiveness of its bidding behaviour (more aggressive means it will trade off profit to improve its chance of transacting; less aggressive means it targets more profitable transactions and is willing to trade off its chance of transacting to achieve them) based on market information observed after any bid or ask appears in the market. The long-term learning then determines how this aggressiveness factor influences an agent's choice of which bids or asks to submit in the market, and is based on market information observed after every transaction (a successfully matched bid and ask). The principal motivation for the short-term learning is to enable the agent to respond immediately to market fluctuations, while that for the long-term learning is to adapt to broader trends in the way in which market demand and supply change over time. We benchmark our strategy against the current state of the art (ZIP and GDX) and show that it outperforms these benchmarks in both static and dynamic environments. This is true both when the population is homogeneous (where the increase in efficiency is up to 5.2%) and heterogeneous (in which case there is a 0.85 probability of our strategy being adopted in a two-population evolutionary game-theoretic analysis).
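To make the aggressiveness mechanism concrete, the following is a minimal sketch of a buyer whose bidding target is modulated by an aggressiveness parameter that is updated after market events. The class name, the parameter `beta`, the interpolation rule, and the update rule are all illustrative assumptions for exposition; they are not the paper's actual formulation of the strategy.

```python
class AggressivenessBuyer:
    """Illustrative buyer with an aggressiveness parameter r in [-1, 1].

    This is a hypothetical sketch of the short-term learning idea described
    in the abstract, not the paper's actual bidding rule.
    """

    def __init__(self, limit_price, beta=0.3):
        self.limit = limit_price   # private valuation: never bid above this
        self.r = 0.0               # aggressiveness (short-term learned state)
        self.beta = beta           # assumed short-term learning rate

    def target_price(self, market_estimate):
        # r -> 1: the target approaches the limit price, trading off profit
        # for a higher chance of transacting.
        # r -> -1: the target drops well below the market estimate, holding
        # out for more profitable transactions.
        if self.r >= 0:
            return market_estimate + (self.limit - market_estimate) * self.r
        return market_estimate * (1.0 + self.r)

    def observe(self, transacted):
        # Short-term update after observed market activity: failing to
        # transact nudges r upwards (bid more aggressively next time);
        # transacting nudges it downwards (try for a better price).
        r_target = 1.0 if not transacted else -1.0
        self.r += self.beta * (r_target - self.r)
```

In this sketch, the long-term learning described in the abstract would correspond to adapting how `target_price` maps aggressiveness to a price (here a fixed linear interpolation) using information observed after each transaction.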