June 27, 2025
We introduce a new framework for time series modeling that approximates optimal performance in both forecasting and inference, with implementations available in both R and Python. We present the underlying statistical and software design principles: what we call a grammar for optimal time series model design. Traditional models such as ARIMA and Exponential Smoothing rely on estimation techniques like Maximum Likelihood, which do not directly optimize for forecast error or bias. The disconnect is even more pronounced in model selection and model combination, where optimality is rarely addressed at all.
Our framework leverages neural networks trained on simulations of time series models to act as near-optimal estimators and forecasters tailored to specific optimality criteria. A suite of pre-trained networks is provided as drop-in replacements for classical estimators commonly found in statistical software. These networks, even when approximating simple statistical models, achieve state-of-the-art performance on major time series benchmark datasets—outperforming both traditional estimators and more complex neural architectures, including foundation models.
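The idea of a simulation-trained estimator can be pictured with a toy sketch. This is our illustration, not the paper's actual architecture or training pipeline: we simulate AR(1) series with known coefficients, then fit a one-parameter linear "network" that maps the sample lag-1 autocorrelation to the true coefficient by stochastic gradient descent on squared estimation error. A real instantiation would feed the full series into a proper neural network and train against a forecast-error criterion; the names (`simulate_ar1`, `train_estimator`) and the linear readout are assumptions made for the sketch.

```python
import random

def simulate_ar1(phi, n, rng):
    """Simulate an AR(1) series x_t = phi * x_{t-1} + e_t, e_t ~ N(0, 1)."""
    x = [rng.gauss(0.0, 1.0)]
    for _ in range(n - 1):
        x.append(phi * x[-1] + rng.gauss(0.0, 1.0))
    return x

def lag1_autocorr(x):
    """Sample lag-1 autocorrelation, the classical summary statistic."""
    m = sum(x) / len(x)
    num = sum((x[i] - m) * (x[i + 1] - m) for i in range(len(x) - 1))
    den = sum((v - m) ** 2 for v in x)
    return num / den

def train_estimator(n=60, steps=5000, lr=0.05, seed=0):
    """Learn f(r) = a*r + b mapping the summary r to phi by SGD on
    squared estimation error over freshly simulated series -- a
    minimal stand-in for training a network on simulations."""
    rng = random.Random(seed)
    a, b = 1.0, 0.0
    for _ in range(steps):
        phi = rng.uniform(-0.9, 0.9)        # draw a true parameter
        r = lag1_autocorr(simulate_ar1(phi, n, rng))
        err = (a * r + b) - phi             # estimation error
        a -= lr * err * r                   # gradient step on 0.5 * err**2
        b -= lr * err
    return a, b

a, b = train_estimator()
```

Because the map is trained against the estimation loss itself, it can absorb effects like small-sample bias that Maximum Likelihood does not target; the same recipe scales up when the hand-picked summary statistic is replaced by the raw series and the linear map by a network.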
The framework is highly extensible, making it straightforward to adapt to non-Gaussian distributions, complex nonlinear dynamics, censoring, time-varying parameters, and other scenarios that are simple to simulate but intractable analytically. Importantly, the entire framework is fully open: all model weights, architectures, training data, and pipelines are open-sourced and designed to be trainable on consumer-grade hardware.
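To make "easy to simulate but intractable analytically" concrete, here is a hypothetical generator for one such scenario, a censored AR(1) process whose observations saturate at a detection limit. Writing down the exact likelihood of the censored observations is awkward, yet producing (series, true parameter) training pairs for a simulation-trained estimator takes only a few lines. All names here (`simulate_censored_ar1`, `make_training_pair`, the cap value) are illustrative, not part of the released framework.

```python
import random

def simulate_censored_ar1(phi, n, cap, rng):
    """Latent AR(1) dynamics x_t = phi * x_{t-1} + e_t; we only
    observe y_t = min(x_t, cap), e.g. a sensor saturating at `cap`.
    Exact likelihood inference is messy; simulation is trivial."""
    x = rng.gauss(0.0, 1.0)
    ys = []
    for _ in range(n):
        x = phi * x + rng.gauss(0.0, 1.0)
        ys.append(min(x, cap))
    return ys

def make_training_pair(rng, n=100, cap=1.0):
    """One (observed series, true parameter) pair, ready to feed into
    a simulation-trained estimator."""
    phi = rng.uniform(-0.9, 0.9)
    return simulate_censored_ar1(phi, n, cap, rng), phi
```

Swapping in a different data-generating process only requires changing the simulator; the estimator-training loop stays the same, which is what makes the approach extensible.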