A recent line of work has shown a surprising connection between multicalibration, a multi-group fairness notion, and omniprediction, a learning paradigm that provides simultaneous loss minimization guarantees for a large family of loss functions. Prior work studies omniprediction in the batch setting. We initiate the study of omniprediction in the online adversarial setting. Although there exist algorithms for obtaining notions of multicalibration in the online adversarial setting \cite{gupta2021onlinevalid}, unlike batch algorithms, they work only for small finite classes of benchmark functions $\cF$, because they require enumerating every function $f \in \cF$ at every round. In contrast, omniprediction is most interesting for learning theoretic \emph{hypothesis classes} $\cF$, which are generally of infinite (or at least exponential) size.

We develop a new online multicalibration algorithm that is well defined for infinite benchmark classes $\cF$ (e.g. the set of all linear functions), and is oracle efficient --- i.e. for any class $\cF$, the algorithm has the form of an efficient reduction to a no-regret learning algorithm for $\cF$. The result is the first efficient online omnipredictor --- an oracle efficient prediction algorithm that can be used to simultaneously obtain no-regret guarantees with respect to all Lipschitz convex loss functions. For the class $\cF$ of linear functions, we show how to make our algorithm efficient in the worst case (i.e. the ``oracle'' that we need is itself efficient even in the worst case). We show how our results extend beyond mean multicalibration to quantile multicalibration, with applications to oracle efficient multivalid conformal prediction. Finally, we show upper and lower bounds on the extent to which our rates can be improved: our oracle efficient algorithm in fact promises a stronger guarantee called ``swap-omniprediction'', and we prove a lower bound showing that obtaining $O(\sqrt{T})$ bounds for swap-omniprediction is impossible in the online setting. On the other hand, we give a (non-oracle-efficient) algorithm that obtains the optimal $O(\sqrt{T})$ omniprediction bounds without going through multicalibration, giving an information theoretic separation between these two solution concepts. We leave the problem of obtaining $O(\sqrt{T})$ omniprediction bounds in an oracle efficient manner as our main open problem.