Reward is the driving force for reinforcement learning (RL) agents. Given its central role in RL, reward is often assumed to be suitably general in its expressivity, as summarized by Sutton and Littman’s reward hypothesis:
“…all of what we mean by goals and purposes can be well thought of as maximization of the expected value of the cumulative sum of a received scalar signal (reward).”
– SUTTON (2004), LITTMAN (2017)
In our work, we take first steps toward a systematic study of this hypothesis. To do so, we consider the following thought experiment involving Alice, a designer, and Bob, a learning agent:

We suppose that Alice thinks of a task she might like Bob to learn to solve – this task could be in the form of a natural language description (“balance this pole”), an imagined state of affairs (“reach any of the winning configurations of a chess board”), or something more traditional like a reward or value function. Then, we imagine Alice translates her choice of task into some generator that will provide learning signal (such as reward) to Bob (a learning agent), who will learn from this signal throughout his lifetime. We then ground our study of the reward hypothesis by addressing the following question: given Alice’s choice of task, is there always a reward function that can convey this task to Bob?
What is a task?
To make our study of this question concrete, we first restrict focus to three kinds of task. Specifically, we introduce three task types that we believe capture sensible kinds of tasks: 1) A set of acceptable policies (SOAP), 2) A policy order (PO), and 3) A trajectory order (TO). These three forms of task represent concrete instances of the kinds of task we might want an agent to learn to solve.
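A bit more precisely, the three task types can be summarized as follows (a minimal formalization sketch; the symbols $\Pi$ for the deterministic policies of the environment and $\mathcal{T}$ for its trajectories are our own shorthand, not spelled out in the post):

```latex
% Sketch of the three task types (our notation):
% Pi = deterministic policies of the environment, T = its trajectories.
\begin{align*}
  \text{SOAP:}\quad & \Pi_G \subseteq \Pi
      && \text{a set of ``acceptable'' policies,}\\
  \text{PO:}\quad   & \succeq_{\Pi} \ \text{a partial order on } \Pi
      && \text{a better-to-worse ranking of policies,}\\
  \text{TO:}\quad   & \succeq_{\mathcal{T}} \ \text{a partial order on } \mathcal{T}
      && \text{a better-to-worse ranking of trajectories.}
\end{align*}
```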
We then study whether reward is capable of capturing each of these task types in finite environments. Crucially, we only focus attention on Markov reward functions; for instance, given a state space that is sufficient to form a task, such as (x,y) pairs in a grid world, is there a reward function that only depends on this same state space that can capture the task?
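Spelled out (a one-line formalization in our notation), a Markov reward function assigns reward based only on the current state, or at most the current state and action, and never on the earlier history:

```latex
% Markov reward: a function of the current state (and possibly action) only.
\[
  R : \mathcal{S} \to \mathbb{R}
  \quad \text{or} \quad
  R : \mathcal{S} \times \mathcal{A} \to \mathbb{R},
  \qquad \text{never a function of the full history } (s_0, a_0, \ldots, s_t).
\]
```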
First Main Result
Our first main result shows that for each of the three task types, there are environment-task pairs for which there is no Markov reward function that can capture the task. One example of such a pair is the “go all the way around the grid clockwise or counterclockwise” task in a typical grid world:
[Figure: the two acceptable policies in the grid world – clockwise (blue) and counterclockwise (purple).]
This task is naturally captured by a SOAP that consists of two acceptable policies: the “clockwise” policy (in blue) and the “counterclockwise” policy (in purple). For a Markov reward function to express this task, it would need to make these two policies strictly higher in value than all other deterministic policies. However, there is no such Markov reward function: the optimality of a single “move clockwise” action depends on whether the agent was already moving in that direction in the past. Since the reward function must be Markov, it cannot convey this kind of information. Similar examples show that Markov reward also cannot capture every policy order and trajectory order.
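One way to make the contradiction concrete (a proof sketch using standard discounted-MDP facts, not the paper’s exact argument): if some Markov reward made both the clockwise and counterclockwise policies optimal, then every directional action would be greedy with respect to the optimal action-value function, so any deterministic policy that mixes the two directions would also be optimal rather than strictly worse.

```latex
% If both pure policies attain the optimal value, every directional action is
% greedy w.r.t. Q^*, so "mixed" policies tie with them instead of losing strictly.
\begin{align*}
  V^{\pi_{\mathrm{cw}}} = V^{\pi_{\mathrm{ccw}}} = V^*
  &\;\Longrightarrow\;
  Q^*(s, a_{\mathrm{cw}}) = Q^*(s, a_{\mathrm{ccw}}) = V^*(s) \quad \text{for all } s \\
  &\;\Longrightarrow\;
  V^{\pi} = V^* \quad \text{for every } \pi \text{ with } \pi(s) \in \{a_{\mathrm{cw}}, a_{\mathrm{ccw}}\}.
\end{align*}
```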
Second Main Result
Given that some tasks can be captured and some cannot, we next explore whether there is an efficient procedure for determining whether a given task can be captured by reward in a given environment. Further, if there is a reward function that captures the given task, we would ideally like to be able to output such a reward function. Our second result is a positive one: for any finite environment-task pair, there is a procedure that can 1) decide whether the task can be captured by Markov reward in the given environment, and 2) output the desired reward function that exactly conveys the task, when such a function exists.
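To give a flavor of how such a procedure can work, here is a minimal sketch for the SOAP case (our illustrative reconstruction, not the paper’s exact algorithm): the value of a fixed policy is linear in the reward vector, so searching for a bounded state-action reward that makes every acceptable policy strictly better at the start state than every unacceptable one is a linear-programming feasibility problem. The environment encoding, the function names, and the choices of state-action rewards, a start-state comparison, reward bounds, and a strictness margin are all assumptions made for the sketch.

```python
# Illustrative sketch only: not the paper's algorithm. We assume a finite MDP
# given as P[a, s, s'] (transition probabilities), a discount factor, and a
# SOAP given as a set of deterministic policies (tuples mapping state -> action).
import itertools

import numpy as np
from scipy.optimize import linprog


def value_coefficients(P, policy, gamma):
    """Matrix W with V^pi = W @ r, where r[s * A + a] = R(s, a)."""
    A, S, _ = P.shape
    P_pi = np.stack([P[policy[s], s, :] for s in range(S)])   # S x S transitions under pi
    M_pi = np.zeros((S, S * A))                               # picks out R(s, pi(s))
    for s in range(S):
        M_pi[s, s * A + policy[s]] = 1.0
    return np.linalg.solve(np.eye(S) - gamma * P_pi, M_pi)    # (I - gamma P_pi)^{-1} M_pi


def find_markov_reward(P, soap, gamma=0.9, start=0, margin=1e-3):
    """Return a reward vector r (indexed by s * A + a) realizing the SOAP, or None."""
    A, S, _ = P.shape
    policies = list(itertools.product(range(A), repeat=S))    # all deterministic policies
    good = [pi for pi in policies if pi in soap]
    bad = [pi for pi in policies if pi not in soap]
    rows, rhs = [], []
    for pg in good:
        wg = value_coefficients(P, pg, gamma)[start]
        for pb in bad:
            wb = value_coefficients(P, pb, gamma)[start]
            rows.append(wb - wg)                              # V^pb(start) - V^pg(start) <= -margin
            rhs.append(-margin)
    res = linprog(c=np.zeros(S * A), A_ub=np.array(rows), b_ub=np.array(rhs),
                  bounds=[(-1.0, 1.0)] * (S * A), method="highs")
    return res.x if res.success else None


if __name__ == "__main__":
    # Four-state ring: action 0 steps clockwise, action 1 steps counterclockwise.
    S, A = 4, 2
    P = np.zeros((A, S, S))
    for s in range(S):
        P[0, s, (s + 1) % S] = 1.0
        P[1, s, (s - 1) % S] = 1.0
    # SOAP from the first main result: "always clockwise" or "always counterclockwise".
    soap = {(0, 0, 0, 0), (1, 1, 1, 1)}
    print(find_markov_reward(P, soap))                          # expected: None (not expressible)
    print(find_markov_reward(P, {(0, 0, 0, 0)}) is not None)    # a lone policy should be expressible
```

On the four-state ring at the bottom of the sketch, the first call should report infeasibility (returning None), matching the first main result, while the second call, whose SOAP contains only the all-clockwise policy, should produce an explicit reward vector.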
This work establishes preliminary pathways toward understanding the scope of the reward hypothesis, but there is much still to be done to generalize these results beyond finite environments, Markov rewards, and simple notions of “task” and “expressivity”. We hope this work provides new conceptual perspectives on reward and its place in reinforcement learning.