August 25, 2022

Maximum Entropy Inverse Reinforcement Learning

#machine-learning

#reinforcement-learning

#inverse-reinforcement-learning

Disclaimer: This post is an adapted version of the theoretical part of a Jupyter Notebook that I wrote in 2019 for a seminar on Imitation Learning. You can find it, an accompanying presentation, and a small framework at github.com/qzed/irl-maxent.

I highly encourage you to have a look at that notebook in case you're interested in the code behind this theory.

Let us start with some context and a brief overview of the problem we're trying to solve: Reinforcement learning (RL) aims to provide a framework for finding the optimal behavior (i.e., an optimal policy) of intelligent agents with regard to some environment they interact with, by directing them via a reward signal that measures their performance. In inverse reinforcement learning (IRL), somewhat conversely, we try to recover the reward function (and therefore the policy) of an agent given a set of demonstrations performed by that agent. At first, this may seem somewhat opposed to the larger goal of creating intelligent agents; however, we can use the demonstrations of experts ― often humans ― to improve our agents by guiding them toward a better solution.

An important aspect of IRL is the recovery of the expert's reward function, which in many classical reinforcement-learning scenarios is often hand-designed. This also sets it apart from behavioral cloning (BC), another category of methods under the larger imitation learning (IL) umbrella [see Osa et al. 2018], which focuses on retrieving the expert's policy instead of the reward. The general idea is that by analyzing the actions taken by an expert, we can infer a better reward function than we could manually design.

Maximum entropy (MaxEnt) inverse reinforcement learning [Ziebart et al. 2008] is a powerful and clever, yet fairly simple way to achieve this. Having discussed the “why”, let us now get to the “how”:

While I assume some familiarity with the basics of reinforcement and inverse reinforcement learning, here's a short overview: In both settings, we describe our world using a Markov decision process (MDP). An MDP is a 5-tuple $(S, A, T, R, \gamma)$ with states $S$, actions $A$, transition model $T(s_t, a_t, s_{t+1}) = p(s_{t+1} \mid s_t, a_t)$, reward function $R(s_t, a_t, s_{t+1}) \in \mathbb{R}$, and reward discount factor $\gamma \in [0, 1]$, decreasing the reward over time (where $a_t \in A$ and $s_t, s_{t+1} \in S$). In the inverse reinforcement learning setting, we will only use per-state rewards $R(s) \in \mathbb{R}$. The key property of an MDP is that the outcome of any action depends solely on the current state, i.e., it is completely independent of the preceding history (the Markov property).
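
To make this concrete, here is a minimal sketch of how such an MDP could be represented with NumPy arrays. The array layout (`p_transition[s, a, s']` for the transition model, a per-state `reward` vector) is just one possible convention of mine, not the only way to do it:

```python
import numpy as np

n_states, n_actions = 5, 2

# Transition model T: p_transition[s, a, s'] = p(s' | s, a), i.e., one
# probability distribution over successor states per state-action pair.
p_transition = np.zeros((n_states, n_actions, n_states))
p_transition[:, 0, :] = np.eye(n_states, k=1)   # action 0: move "right"
p_transition[-1, 0, -1] = 1.0                   # ...and stay put at the end
p_transition[:, 1, :] = np.eye(n_states)        # action 1: stay in place

# Per-state reward R(s) and discount factor gamma.
reward = np.zeros(n_states)
reward[-1] = 1.0
gamma = 0.9

# Sanity check: every (s, a) pair must yield a probability distribution.
assert np.allclose(p_transition.sum(axis=2), 1.0)
```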

The goal of reinforcement learning then is to find an optimal sequence of actions, i.e., a policy describing the optimal action per state, for an autonomous agent given a reward signal $R$. The reward signal provides feedback to the agent, affirming its behavior on actions that help further the goal of the task at hand, and punishing it on actions that are counter-productive to this goal. Policies come in two flavors: deterministic and stochastic. Deterministic policies are defined as a function $\pi: S \to A$, whereas stochastic policies are defined as probability distributions over actions, i.e., $\pi(a \mid s)$. While in the usual RL setting the (or at least one) optimal policy is guaranteed to be deterministic, in inverse reinforcement learning (and not only there) it makes more sense to talk about stochastic policies.

Inverse reinforcement learning, as discussed above and as the name suggests, does the inverse of RL: Given a set of demonstrations $\mathcal{D} = \{\tau_i\}_{i=1}^{n}$ of an expert (agent), consisting of trajectories $\tau = \left( (s_1, a_1), (s_2, a_2), \ldots, (s_{n-1}, a_{n-1}), s_n \right)$ through the state-action space, IRL aims to recover the underlying reward function $R$ that explains the behavior of the expert.

To allow for computation with states, we use so-called features. Features are, in this case, defined as a function $\phi: S \to \mathbb{R}^d$, mapping states to $d$-dimensional vectors, and can be chosen arbitrarily. In general, we assume that $\phi(\tau) = \sum_{t = 1}^{|\tau|} \phi(s_t)$, where $|\tau|$ denotes the length of the trajectory (the number of visited states), i.e., that the features of a trajectory $\tau$ are the sum of the features of the states visited by this trajectory, counting repetitions.
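
As a small illustration (using a hypothetical NumPy setup with one-hot state-indicator features, i.e., $\phi(s_i)$ is the $i$-th unit vector), the feature vector of a trajectory is simply the sum of the feature vectors of its states:

```python
import numpy as np

n_states = 5
features = np.eye(n_states)            # phi(s): one-hot indicator features

def feature_count(features, trajectory):
    """Sum the feature vectors of all states visited by a trajectory,
    counting repeated visits. `trajectory` is a sequence of state indices."""
    return features[trajectory, :].sum(axis=0)

# Example: a trajectory visiting states 0 -> 1 -> 1 -> 2
print(feature_count(features, [0, 1, 1, 2]))   # [1. 2. 1. 0. 0.]
```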

The general idea of Maximum Entropy Inverse Reinforcement Learning is based on feature-expectation matching [Abbeel and Ng 2004]: The expected visitation-frequency of features in the demonstrations provided by the expert should equal the expected visitation-frequency of features visited by the agent following the recovered reward function. In other words: The features describing the states of our world should be, in expectation, visited equally often by the agent following our recovered reward function and by the expert demonstrating (nearly) optimal behavior. In equations, we can express this as an expectation of the features $\phi$ over trajectories $\tau$ drawn from the distribution induced by the reward we want to recover:

$$ \mathbb{E}_{\pi^L}[\phi(\tau)] = \mathbb{E}_{\pi^E}[\phi(\tau)], $$

where the learner follows a policy $\pi^L$ directly implied by the reward. The expert follows an implicit policy $\pi^E$ that encodes its notion of reward and optimality.
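
In practice, the expert side of this constraint is estimated empirically from the demonstrations, e.g. by averaging the per-trajectory feature counts. A small sketch continuing the one-hot feature example from above (the helper name `expected_feature_count` is my own choice):

```python
import numpy as np

def expected_feature_count(features, trajectories):
    """Empirical estimate of the expert feature expectation: the average of
    the summed state features over all demonstrated trajectories."""
    return np.mean([features[t, :].sum(axis=0) for t in trajectories], axis=0)

features = np.eye(5)                             # one-hot state features
demos = [[0, 1, 2, 4], [0, 1, 1, 2, 4]]          # demonstrated state sequences
print(expected_feature_count(features, demos))   # [1.  1.5 1.  0.  1. ]
```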

Note that we can also express the expected feature visitation frequency as a sum over trajectories

$$ \mathbb{E}_{\pi^L}[\phi(\tau)] = \sum_{\tau} p_{\pi^L}(\tau)\, \phi(\tau) $$

with the probability distribution $p_{\pi^L}(\tau)$ of trajectories of the learner. Finding such a match, however, is not a well-posed problem according to Hadamard's definition: There is not one single solution, but many possible ones. Multiple reward functions can achieve the same expected feature-visitation frequency [Ng et al. 1999; Abbeel and Ng 2004; Ziebart 2010]. To resolve this ambiguity, Ziebart et al. [Ziebart et al. 2008] propose to choose the solution (i.e., the $p_{\pi^L}$) maximizing the entropy.

But why should we maximize the entropy? To answer this, we have to understand what entropy represents. First, the entropy $H(p)$ of a probability distribution $p$ over events is defined as

$$\begin{align*} H(p) &= -\sum_{x \in \mathcal{X}} p(x) \log_2 p(x) \\ &= \sum_{x \in \mathcal{X}} p(x) \cdot \log_2 \frac{1}{p(x)}, \end{align*}$$

where $\mathcal{X}$ is the set containing all events $x$ and $p(x)$ is the probability of event $x$ occurring.

Let us now assume that we want to communicate the occurrence of events $x$ to some other party efficiently. To accomplish this, we wish to minimize the expected message length, so messages that will be sent frequently should be short, and messages that will seldom be sent are allowed to be long. Following this trail of thought leads to the term $h(x) = \log_2(1/p(x))$, which is the optimal number of bits that we should spend on the message representing event $x$. With lower probability $p(x)$, the term $\log_2(1/p(x))$ increases. In fact, it increases logarithmically, as with more bits available we can encode exponentially more events. Plugging this observation into the entropy $H$ then reveals that the entropy is nothing more than the lower limit of the expected message length, i.e.:

$$ H(p) = \mathbb{E}_{p(x)}\left[h(x)\right]. $$

Summarizing all of this in a single sentence: Entropy is a measure of uncertainty.

Note: This is, in fact, the underlying principle of Huffman coding, which, due to this relation, is guaranteed to be optimal.

As is generally done, we will from now on use the natural logarithm instead of the base-two one. Since $\log_2(x) = \log(x)/\log(2)$ holds, this only changes the result by a constant factor of $\log(2)$, which does not impact extrema or our reasoning below, and it simplifies further computations.
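
For illustration, here is a tiny sketch computing the entropy of a discrete distribution in bits and in nats; as stated above, the two only differ by the constant factor $\log(2)$:

```python
import numpy as np

def entropy(p, base=None):
    """Entropy of a discrete distribution p. base=2 gives bits, the
    natural logarithm (base=None) gives nats."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]                            # treat 0 * log(0) as 0
    h = -np.sum(p * np.log(p))              # entropy in nats
    return h / np.log(base) if base is not None else h

p = [0.5, 0.25, 0.125, 0.125]
print(entropy(p, base=2))                   # 1.75 bits
print(entropy(p) / np.log(2))               # same value, converted from nats
```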

With that in mind, let us ask again: Why should we maximize the entropy? Now we can give a simple argument [Jaynes 1957]: Let us assume that we have two distributions, $p$ and $q$, with entropy values $H(p) > H(q)$. From our previous observations we know that to encode messages from $p$ we need, in expectation, more bits than to encode messages from $q$. This is the key point, because it in turn means that we have less information about $p$ than about $q$: If we know more about a distribution, we require fewer bits to encode events from it. If we know everything, we do not need to send/encode anything, and if we know nothing, we need to send everything.

But isn't it good if we know more? “The less entropy, the better, right?” you may ask. No. Not for many optimization problems, or in general, when working with incomplete information: Consider a problem, like feature-expectation matching, which has multiple solutions fulfilling our constraints. The only information which we give our solver for this problem is the feature-expectations we want to replicate. All solutions provided by our solver contain this information. Yet, they most certainly have different entropy values. This must mean that (at least) some solutions have additional information that we did not provide, i.e., they have a bias. This is illustrated in the figure below for our two distributions $p$ and $q$:

Illustration of the Maximum Entropy Principle

By choosing the solution with maximum entropy, we choose the solution with minimal information. Note that all solutions already satisfy our constraints; thus we actually pick the solution that best fits our information with a minimal bias.

Applying our findings from the previous section to feature-expectation matching directly results in a constrained optimization problem:

$$\begin{alignat*}{2} \argmax_p\quad && H(p) \\ \text{subject to}\quad && \mathbb{E}_{\pi^L}[\phi(\tau)] &= \mathbb{E}_{\pi^E}[\phi(\tau)], \tag{1a} \\ && \sum_{\tau} p(\tau) = 1, &\quad \forall \tau : p(\tau) > 0, \tag{1b} \end{alignat*}$$

where Equation (1a) represents the feature-matching constraint and Equation (1b) the probability constraints.

This problem expresses that we want to find a probability distribution $p(\tau)$ over trajectories $\tau$ with maximum entropy, under which the expected feature visitation frequencies of the learner, $\mathbb{E}_{\pi^L}[\phi(\tau)]$, and of the expert, $\mathbb{E}_{\pi^E}[\phi(\tau)]$, match (recall that $\phi(\tau)$ represents the features visited by trajectory $\tau$). Note that the policy $\pi^L$ of the learner depends on the probability distribution $p(\tau)$ via

$$ \pi^L(a \mid s) \propto \sum_{\tau :\ (s, a) \in \tau_{t=0}} p(\tau), \tag{2} $$

i.e., the policy of the learner is proportional to the sum of the probabilities of all trajectories performing action $a$ in the starting state $s$.

Now we have the probability distribution over trajectories, but how does this help us to recover the reward function $R: S \to \mathbb{R}$? Following the school of basic reinforcement learning provides us with the answer: The reward dictates the policy, which in turn dictates the trajectory distribution $p(\tau)$. A key assumption of Maximum Entropy IRL is that we know the behavior of our MDP, i.e., we have a perfect model representing the transition probabilities $p(s_{t+1} \mid s_t, a_t)$, so the only thing we actually need to learn is the reward. Everything else is either derived from it or assumed to be known. To be able to learn the reward, we must parameterize it. For this, we assume linearity in the features, i.e.,

$$ R(\tau) = \omega^\top \phi(\tau), $$

with $\omega \in \mathbb{R}^d$ being the reward parameter vector. To make the dependency on the reward parameters explicit, we write $p(\tau \mid \omega)$ and $\pi(a \mid s, \omega)$.

Note: The choice of a linear parametrization is not by chance, but a direct result of the feature-expectation matching constraint: Given an expert solving an MDP with an unknown reward function linear in a set of known features, performing feature-expectation matching with those features is guaranteed to yield a policy performing comparably to or better than the expert with respect to that reward [Abbeel and Ng 2004].

Solving the above mathematical program for MDPs with deterministic transition dynamics (i.e., MDPs where all actions have predetermined deterministic outcomes) using the method of Lagrange multipliers leads to the observation that

$$\begin{align*} p_d(\tau \mid \omega) &= \frac{1}{Z(\omega)} \exp \left( \omega^\top \phi(\tau) \right), \\ Z(\omega) &= \sum_{\tau} \exp \left( \omega^\top \phi(\tau) \right), \end{align*}$$

with the so-called partition function $Z(\omega)$ simply normalizing the values to fulfill the probability-distribution constraint [Ziebart et al. 2008; Osa et al. 2018]. By choosing the reward to depend linearly on the features, we have also ensured that the Lagrange multipliers are precisely the reward parameters $\omega$ [Osa et al. 2018]. For MDPs with stochastic transitions, Ziebart et al. [Ziebart et al. 2008] propose to modify the deterministic solution, yielding the approximation

$$ p_s(\tau \mid \omega) \approx \frac{1}{Z(\omega)} \exp\left( \omega^\top \phi(\tau) \right) \prod_{s_{t+1}, a_t, s_t \in \tau} p(s_{t+1} \mid s_t, a_t). \tag{3} $$

Note: We are simply multiplying the deterministic solution with the combined transition probability of the trajectory. As a result of this modification, however, we introduce a bias into the reward function [Osa et al. 2018]. To avoid this, Ziebart [Ziebart 2010] proposes the Maximum Causal Entropy IRL method in his thesis [see also Ziebart et al. 2013]. In the scope of this post, we'll stick to the normal (non-causal) entropy.
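
To make Equation (3) a bit more tangible, here is a sketch computing the unnormalized probability of a single given trajectory. Normalizing it would require the partition function $Z(\omega)$, i.e., a sum over all trajectories, which is exactly the kind of enumeration we avoid below. The `(state, action, next_state)` triple format and the `p_transition[s, a, s']` indexing are my own assumptions:

```python
import numpy as np

def unnormalized_trajectory_probability(omega, features, p_transition, trajectory):
    """Unnormalized p_s(tau | omega) from Equation (3): exp(omega^T phi(tau))
    times the product of transition probabilities along the trajectory.
    `trajectory` is a sequence of (state, action, next_state) triples."""
    states = [s for s, _, _ in trajectory] + [trajectory[-1][2]]
    reward = omega @ features[states, :].sum(axis=0)      # omega^T phi(tau)
    p_dynamics = np.prod([p_transition[s, a, s_next]
                          for s, a, s_next in trajectory])
    return np.exp(reward) * p_dynamics
```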

We now know the parameterized form of the distribution, so how can we optimize the parameters? The answer is fairly straightforward: We can maximize the log-likelihood over all demonstrated trajectories $\mathcal{D} = \{\tau_i\}_{i=1}^{N}$, i.e.,

$$ \omega^* = \argmax_{\omega} \mathcal{L}(\omega) = \argmax_{\omega} \sum_{\tau \in \mathcal{D}} \log p(\tau \mid \omega), \tag{4} $$

using gradient ascent (or any other gradient-based optimization technique), as this optimization problem is convex [Ziebart et al. 2008; Osa et al. 2018].

Note: Maximizing the log-likelihood is equivalent to minimizing the Kullback-Leibler divergence [Bishop 2006]

$$ D_{KL}\big(p^E(\tau) \,\|\, p(\tau \mid \omega)\big). $$

It can therefore also be interpreted as the M-projection onto the manifold of the maximum entropy distribution [see e.g., Osa et al. 2018].

The (averaged) gradient is then simply the difference between the expected empirical feature counts $\mathbb{E}_{\pi^E} \left[ \phi(\tau) \right]$ and the learner's expected feature counts (derived from the trajectory probabilities $p(\tau \mid \omega)$), shown in Equation (5a), i.e.:

$$\begin{align*} \nabla_\omega \mathcal{L}(\omega) &= \mathbb{E}_{\pi^E} \left[ \phi(\tau) \right] - \sum_{\tau} p(\tau \mid \omega)\, \phi(\tau), \tag{5a} \\ &= \mathbb{E}_{\pi^E} \left[ \phi(\tau) \right] - \sum_{s_i} D_{s_i}\, \phi(s_i), \tag{5b} \end{align*}$$

which we can express in terms of the state-visitation frequency $D_{s_i}$ as in Equation (5b), using the assumption $\phi(\tau) = \sum_{t = 1}^{|\tau|} \phi(s_t)$ from above [Ziebart et al. 2008].

Note: The realization that we can transform Equation (5a) into Equation (5b), i.e., represent the expected feature count of the learner via the state-visitation frequency $D_{s_i}$, is actually one of the most important steps!

Equation (5a) is generally infeasible (or at the very least quite costly) to compute, as enumerating all trajectories is usually subject to combinatorial explosion (think, for example, of trajectories in a “simple” grid world). Computing the state visitation frequencies, as we will see below, however, is quite feasible (within some restrictions).

Note: Due to the log-likelihood, this gradient holds for both deterministic and stochastic transition dynamics.
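
Assuming we already have the expert's feature expectation and the learner's state-visitation frequencies $D_{s_i}$, the gradient of Equation (5b) reduces to a one-liner. A sketch, with `features` as an $|S| \times d$ matrix of per-state feature vectors and `d_svf` as the vector of state-visitation frequencies:

```python
import numpy as np

def log_likelihood_gradient(e_features_expert, features, d_svf):
    """Gradient from Equation (5b): expert feature expectation minus the
    learner's expected feature count, expressed via the state-visitation
    frequencies (one entry of d_svf per state)."""
    return e_features_expert - features.T @ d_svf
```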

Now, we have only one remaining question: How do we compute the state-visitation frequency $D_{s_i}$? Following good computer-science fashion, we divide and conquer: First, we compute a (stochastic) policy $\pi_{\text{ME}}(a \mid s, \omega)$, referred to as the local action probability, with a backward pass. Thereafter, we compute the state-visitation frequency under this policy with a simple forward pass.

The basic idea of the first pass is to remember our earlier observation about the learner policy $\pi^L(a \mid s)$ made in Equation (2), i.e.,

$$ \pi_{\text{ME}}(a_j \mid s_i, \omega) \propto \sum_{\tau :\ (s_i, a_j) \in \tau_{t=0}} p(\tau \mid \omega), $$

and then recursively expand this using the solution we derived for our trajectory distribution $p(\tau \mid \omega)$ in Equation (3), until we are no longer reasoning about full or partial trajectories but about individual state transitions. This yields the following set of equations [Ziebart et al. 2008]:

$$\begin{align*} \pi_{\text{ME}}(a_j \mid s_i, \omega) &= \frac{Z_{s_i, a_j}}{Z_{s_i}}, \tag{6a} \\ Z_{s_i, a_j} &= \sum_{k} p(s_k \mid s_i, a_j)\, \exp \left(\omega^\top \phi(s_i)\right)\, Z_{s_k}, \tag{6b} \\ Z_{s_i} &= \sum_{a_j} Z_{s_i, a_j}. \tag{6c} \end{align*}$$

Here, $Z_{s_i}$ represents the state partition function, again normalizing the probabilities, and $Z_{s_i, a_j}$ is referred to as the state-action partition function. Setting the state partition function $Z_{s_i} = 1$ for terminal states and then recursively backing up from those states, using the equations for $Z_{s_i}$ and $Z_{s_i, a_j}$, yields a set of values that we can then use to compute the local action probabilities via Equation (6a).

Note: This backup-pass somewhat resembles a value-iteration scheme to solve an MDP. It also turns out that this backup part of the algorithm is the only thing that differs between Maximum Entropy and Maximum Causal Entropy IRL.

In fact, for the Maximum Causal Entropy approach, one can show that the partition functions are an analogy of the Bellman equation — the so-called soft Bellman equation — which allows for a straightforward extension of the value-iteration algorithm to this problem [Ziebart 2010; Ziebart et al. 2013].
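
A minimal sketch of this backward pass (Equations (6a–c)) might look as follows. It uses the `p_transition[s, a, s']` convention from before, a list `terminal` of terminal states, and a fixed number of backup iterations instead of a proper convergence check; following Ziebart et al. [2008], terminal states contribute a constant one to their state partition function in every backup:

```python
import numpy as np

def backward_pass(p_transition, features, omega, terminal):
    """Compute the local action probabilities pi_ME(a | s, omega)."""
    n_states, n_actions, _ = p_transition.shape
    er = np.exp(features @ omega)               # exp(omega^T phi(s)) per state

    # Initialize the state partition function at the terminal states.
    zs = np.zeros(n_states)
    zs[terminal] = 1.0

    for _ in range(2 * n_states):
        # State-action partition function, Equation (6b).
        za = np.einsum('sak,s,k->sa', p_transition, er, zs)
        # State partition function, Equation (6c), plus the terminal contribution.
        zs = za.sum(axis=1)
        zs[terminal] += 1.0

    # Local action probability, Equation (6a). The lower bound guards against
    # division by zero for states from which no terminal state is reachable.
    return za / np.maximum(zs, 1e-300)[:, None]
```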

As we now know the policy $\pi_{\text{ME}}(a_j \mid s_i, \omega)$, inferring the state-visitation frequency $D_{s_i}$ becomes fairly easy: Remember that the policy is a probability distribution that tells us, given a state $s_i$, with which probability the agent should take a certain action $a_j$, which then leads to a new state $s_k$ according to our MDP transition model. Given an initial state-visitation frequency, we can therefore use the policy to forward-propagate the individual visitation counts, obtaining an updated state-visitation frequency map.

To this end, we initialize $D_{s_i, 0}$ to the starting-state probabilities $p_0(s_i)$ at timestep $t=0$, i.e.,

$$ D_{s_i, 0} = p_0(s_i) = p(\tau : s_i \in \tau_{t=0}). \tag{7a} $$

We then compute updates as mentioned above via

$$ D_{s_k, t+1} = \sum_{s_i} \sum_{a_j} D_{s_i, t} \cdot \pi_{\text{ME}} (a_j \mid s_i, \omega) \cdot p(s_k \mid s_i, a_j), \tag{7b} $$

where $p(s_k \mid s_i, a_j)$ represents the transition probabilities of our (stochastic) MDP. Note that $D_{s_i, t}$ only gives us the state-visitation frequency for time step $t$, meaning we have to sum over all time steps to obtain the time-independent state-visitation frequency

$$ D_{s_i} = \sum_{t} D_{s_i, t}. \tag{7c} $$
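
A corresponding sketch of the forward pass (Equations (7a–c)), again with a fixed horizon `t_max` instead of a convergence criterion and with `p_initial` as the (e.g. empirically estimated) starting-state distribution:

```python
import numpy as np

def forward_pass(p_transition, p_initial, policy, t_max):
    """Compute the state-visitation frequencies D_{s_i} under the given policy."""
    # Equation (7a): initialize with the starting-state distribution.
    d = p_initial.copy()
    d_total = p_initial.copy()

    # Equation (7b): propagate the visitation mass forward through time.
    for _ in range(t_max - 1):
        d = np.einsum('s,sa,sak->k', d, policy, p_transition)
        d_total += d

    # Equation (7c): sum over all time steps.
    return d_total
```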

With a plan for computing the state-visitation frequency, we now have everything we need to state the full algorithm. Before doing so, however, let us first recap our findings:

  • Our problem is a constrained optimization problem for finding the optimal reward parameters $\omega$. We can solve this problem by maximizing the log-likelihood over all demonstrated trajectories $\mathcal{D}$ (see Equation (4)). This, in turn, is itself an optimization problem, for which we need the gradient $\nabla \mathcal{L}(\omega)$ of this log-likelihood.

  • To compute the gradient $\nabla \mathcal{L}(\omega)$, we require the state-visitation frequency (see Equation (5b)).

  • To compute the state-visitation frequency, we require the policy $\pi_{\text{ME}}(a_j \mid s_i, \omega)$ (see Equation (7b)).

Putting this all together yields the following iteration scheme:

  1. Compute the policy $\pi_{\text{ME}}(a_j \mid s_i, \omega)$ using the current estimate of $\omega$ via the backward pass, i.e., Equations (6a–c).

  2. Compute the state-visitation frequency $D_{s_i}$ via the forward pass, i.e., Equations (7a–c).

  3. Compute the gradient $\nabla \mathcal{L}(\omega)$ via Equation (5b).

  4. Perform one gradient-based optimization step, e.g.,

    $$ \omega \gets \omega + \eta \nabla \mathcal{L}(\omega). $$

We then iterate these four steps until convergence.
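
Putting the pieces together, a minimal (and unoptimized) sketch of this loop could look like the following, reusing the hypothetical `expected_feature_count`, `backward_pass`, `forward_pass`, and `log_likelihood_gradient` helpers sketched above; for a complete, tested implementation see the notebook linked below:

```python
import numpy as np

def maxent_irl(p_transition, features, terminal, trajectories, p_initial,
               t_max=100, n_iterations=200, learning_rate=0.1):
    """Recover a per-state reward from demonstrated state sequences."""
    _, d_features = features.shape

    # Empirical expert feature expectation, the target of the matching constraint.
    e_features_expert = expected_feature_count(features, trajectories)

    omega = np.zeros(d_features)                  # initial reward parameters

    for _ in range(n_iterations):
        # 1. Backward pass: local action probabilities for the current omega.
        policy = backward_pass(p_transition, features, omega, terminal)

        # 2. Forward pass: state-visitation frequencies under that policy.
        d_svf = forward_pass(p_transition, p_initial, policy, t_max)

        # 3. Gradient of the log-likelihood, Equation (5b).
        grad = log_likelihood_gradient(e_features_expert, features, d_svf)

        # 4. One gradient-ascent step on the reward parameters.
        omega += learning_rate * grad

    return features @ omega                       # recovered per-state reward R(s)
```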

Note: You can find a Jupyter Notebook discussing a simple implementation of this algorithm (applied to a grid-world example) at github.com/qzed/irl-maxent.

Maximum entropy IRL is a clever and ― all in all ― fairly simple solution to the general inverse reinforcement learning problem. However, it does have some drawbacks and limitations:

First and foremost, the algorithm as explained in this post requires “solving” the MDP once per iteration. As a result, it is fairly limited in the size of the discrete state-action spaces that it can (reasonably) be applied to. Extensions to continuous state spaces, for example via Monte-Carlo sampling or via path integrals, have been proposed [see e.g., Aghasadeghi and Bretl 2011; Kalakrishnan et al. 2013]. Furthermore, the assumption that the transition dynamics of the environment, i.e., the probability distribution $p(s_{t+1} \mid s_t, a_t)$, are known limits its applicability.

It also assumes a linear parametrization of the reward, $R(s) = \omega^\top \phi(s)$. This can be somewhat mitigated by our choice of features $\phi(s)$, but feature-expectation matching also only guarantees performance parity with respect to a linear reward parametrization. An extension to model nonlinear reward functions via neural networks has been suggested by Wulfmeier et al. [2016].

Lastly, maximum entropy IRL is constrained to limited transition randomness, i.e., fairly deterministic transition dynamics $p(s_{t+1} \mid s_t, a_t)$. As noted above, this is due to the reward bias introduced by our probability approximation and can be avoided via the maximum causal entropy IRL approach, proposed subsequently by Ziebart [Ziebart 2010; Ziebart et al. 2013].

A somewhat similar, yet model-free approach is relative entropy IRL by Boularias et al. [2011]. See the paper by Snoswell et al. [2020] for a perspective on both relative and maximum entropy IRL approaches.

Abbeel, P. and Ng, A.Y. 2004. Apprenticeship learning via inverse reinforcement learning. Twenty-first international conference on Machine learning - ICML ’04, ACM Press.
Aghasadeghi, N. and Bretl, T. 2011. Maximum entropy inverse reinforcement learning in continuous state spaces with path integrals. 2011 IEEE/RSJ International Conference on Intelligent Robots and Systems, 1561–1566.
Bishop, C.M. 2006. Pattern Recognition and Machine Learning. Springer, New York.
Boularias, A., Kober, J., and Peters, J. 2011. Relative Entropy Inverse Reinforcement Learning. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, JMLR Workshop and Conference Proceedings, 182–189.
Jaynes, E.T. 1957. Information Theory and Statistical Mechanics. Physical Review 106, 4, 620–630.
Kalakrishnan, M., Pastor, P., Righetti, L., and Schaal, S. 2013. Learning objective functions for manipulation. 2013 IEEE International Conference on Robotics and Automation, 1331–1336.
Ng, A.Y., Harada, D., and Russell, S.J. 1999. Policy Invariance Under Reward Transformations: Theory and Application to Reward Shaping. Proceedings of the Sixteenth International Conference on Machine Learning, Morgan Kaufmann Publishers Inc., 278–287.
Osa, T., Pajarinen, J., Neumann, G., Bagnell, J.A., Abbeel, P., and Peters, J. 2018. An Algorithmic Perspective on Imitation Learning. Foundations and Trends in Robotics 7, 1–2, 1–179.
Snoswell, A.J., Singh, S.P.N., and Ye, N. 2020. Revisiting Maximum Entropy Inverse Reinforcement Learning: New Perspectives and Algorithms. 2020 IEEE Symposium Series on Computational Intelligence (SSCI), 241–249.
Wulfmeier, M., Ondruska, P., and Posner, I. 2016. Maximum Entropy Deep Inverse Reinforcement Learning. http://arxiv.org/abs/1507.04888.
Ziebart, B.D. 2010. Modeling purposeful adaptive behavior with the principle of maximum causal entropy. https://www.cs.cmu.edu/~bziebart/publications/thesis-bziebart.pdf.
Ziebart, B.D., Bagnell, J.A., and Dey, A.K. 2013. The Principle of Maximum Causal Entropy for Estimating Interacting Processes. IEEE Transactions on Information Theory, IEEE, 1966–1980.
Ziebart, B.D., Maas, A., Bagnell, J.A., and Dey, A.K. 2008. Maximum Entropy Inverse Reinforcement Learning. Proceedings of the 23rd National Conference on Artificial Intelligence - Volume 3, AAAI Press, 1433–1438.