Inference
inference
Inference components (models, likelihoods, proposals, sampling, results).
Classes
AdaptiveSubchain
dataclass
AdaptiveSubchain(state, control)
Adaptive subchain policy for DA-MCMC guided active learning.
AdaptiveSubchain implements the hook interface AdaptiveHook used by ActiveMCMCModel.
Semantics
- On each coarse call: record the current subchain length.
- On each fine call:
  - compute and store the LF-HF error (currently RMSE in observation space),
  - update the subchain length if the update schedule triggers,
  - increment HF counters.
Intended usage
This policy is intended to be used with:
- two posteriors [coarse, fine] (DA-MCMC is mandatory), and
- a chunked sampler such as sample_adaptive_active_chain,
because tinyDA requires a fixed subsampling rate within each call.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| state | AdaptiveSubchainState | Mutable adaptive state (current length, histories, counters). | required |
| control | AdaptiveSubchainControl | Control parameters (targets, bounds, and update schedule). | required |
See Also
ActiveMCMCModel
The active model that triggers on_coarse_call / on_fine_call.
ChunkedMCMCConfig
Chunk configuration used by the adaptive sampler.
Functions
on_coarse_call
on_coarse_call()
Hook called by the active model at the start of a coarse evaluation.
on_fine_call
on_fine_call(*, y_hf, y_lf)
Hook called by the active model during fine evaluation.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| y_hf | FloatArray | HF output at the current parameter value. | required |
| y_lf | FloatArray | LF predictive mean at the current parameter value (before surrogate update). | required |
AdaptiveSubchainControl
dataclass
AdaptiveSubchainControl(update_every=10, target_error=0.01, min_subchain=1, max_subchain=10000, grow_factor=2.0, shrink_factor=0.5)
Hyperparameters for adaptive subchain-length control.
This control block specifies how the adaptive policy reacts to observed LF-HF discrepancy during sampling.
In DA-MCMC guided active learning, the sampler periodically performs a fine (HF) correction. The number of coarse steps taken between fine corrections is the subchain length (often called the subsampling rate).
The update rule is:
- if the most recent LF-HF error is above target_error, decrease the subchain length (more frequent HF corrections);
- if the error is below target_error, increase the subchain length (less frequent HF corrections).
Updates are attempted every update_every HF evaluations.
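The schedule and bounds combine into a simple multiplicative rule. A minimal sketch of that rule (the function name is hypothetical, not the library's API), using the default hyperparameters listed below:

```python
def update_subchain_length(length, last_error, *, target_error=0.01,
                           grow_factor=2.0, shrink_factor=0.5,
                           min_subchain=1, max_subchain=10000):
    """Multiplicative update of the subchain length, clamped to bounds."""
    if last_error > target_error:
        # Error too large: shrink the subchain (more frequent HF corrections).
        length = int(length * shrink_factor)
    else:
        # Error acceptable: grow the subchain (fewer HF corrections).
        length = int(length * grow_factor)
    return max(min_subchain, min(max_subchain, length))
```

With the defaults, a subchain of 10 halves to 5 when the error exceeds the target and doubles to 20 when it does not.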
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| update_every | int | Number of HF evaluations between policy updates. Must be positive. | 10 |
| target_error | float | Target level for the LF-HF error statistic (non-negative). | 0.01 |
| min_subchain | int | Lower bound on the subchain length (minimum spacing between HF corrections is 1). | 1 |
| max_subchain | int | Upper bound on the subchain length. | 10000 |
| grow_factor | float | Multiplicative factor used when error is below target. Must be > 1. | 2.0 |
| shrink_factor | float | Multiplicative factor used when error is above target. Must be in (0, 1). | 0.5 |
Notes
The default values are tuned for toy problems. In real applications, target_error
should reflect acceptable surrogate error in the observation space used by the likelihood.
See Also
AdaptiveSubchain
Adaptive policy that uses this control block.
ActiveMCMCModel
The active model that calls the policy hooks during coarse/fine evaluations.
AdaptiveSubchainState
dataclass
AdaptiveSubchainState(subchain_length=10, subchain_history=list(), hf_errors=list(), total_hf_steps=0, _hf_since_update=0)
Mutable state for adaptive subchain control.
This object stores the current subchain length and the histories needed for diagnostics and adaptation.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| subchain_length | int | Current subchain length (number of coarse steps between HF corrections). Must be positive. | 10 |
Attributes:
| Name | Type | Description |
|---|---|---|
| subchain_history | list[int] | Records the subchain length at each coarse evaluation (aligned with coarse calls). |
| hf_errors | list[float] | LF-HF error values computed at each fine evaluation. |
| total_hf_steps | int | Total number of HF evaluations performed. |
| _hf_since_update | int | Internal counter tracking HF evaluations since the last update. |
Notes
The error statistic used here is the RMSE between the LF predictive mean and the HF output in observation space: `sqrt(mean((lf_mean - y_hf)**2))`.
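As a minimal illustration (the function name is hypothetical, not the library's API), the statistic can be computed as:

```python
import numpy as np


def lf_hf_rmse(lf_mean, y_hf):
    """RMSE between LF predictive mean and HF output, in observation space."""
    lf_mean = np.asarray(lf_mean, dtype=float)
    y_hf = np.asarray(y_hf, dtype=float)
    if lf_mean.shape != y_hf.shape:
        raise ValueError("lf_mean and y_hf must have the same shape")
    return float(np.sqrt(np.mean((lf_mean - y_hf) ** 2)))
```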
See Also
AdaptiveSubchainControl
Control parameters used to update the subchain length.
Functions
append_length
append_length()
Record the current subchain length.
This should be called exactly once per coarse evaluation so that
subchain_history remains aligned with coarse calls.
step
step()
Advance HF counters.
This should be called exactly once per fine evaluation.
append_error
append_error(lf_mean, y_hf)
Compute and record an LF-HF error statistic (RMSE).
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| lf_mean | FloatArray | LF predictive mean at the current parameter value (before surrogate update). | required |
| y_hf | FloatArray | HF model output at the current parameter value. | required |
Raises:
| Type | Description |
|---|---|
| ValueError | If |
update_subchain
update_subchain(control)
Update the subchain length according to control.
The update is performed only when:
- at least control.update_every HF evaluations have occurred since the last update, and
- at least one error value is available.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| control | AdaptiveSubchainControl | Policy hyperparameters controlling update frequency, bounds, and scaling. | required |
ChainExtras
dataclass
ChainExtras(used_hf=None, accepted=None, subchain_length=None)
Per-step metadata aligned with an MCMCChain.
ChainExtras stores optional arrays aligned one-to-one with the sample matrix in
MCMCChain. Keeping these fields separate
makes the core chain representation predictable while still supporting diagnostics.
Attributes:
| Name | Type | Description |
|---|---|---|
| used_hf | BoolArray \| None | Boolean array of length |
| accepted | BoolArray \| None | Boolean array of length |
| subchain_length | IntArray \| None | Integer array of length |
Functions
slice
slice(sl)
Return a sliced view of extras.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| sl | slice | Slice to apply. | required |
Returns:
| Type | Description |
|---|---|
| extras | New |
MCMCChain
dataclass
MCMCChain(samples, extras=ChainExtras())
Immutable container for MCMC samples and aligned diagnostics.
The chain is represented as:
- samples: a 2D array of shape (n_steps, n_dim)
- extras: optional per-step diagnostics aligned with samples
The class provides lightweight post-processing utilities (burn-in removal, thinning, and summary statistics) without mutating the original object.
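The post-processing amounts to row slicing on the sample matrix; a minimal sketch using raw NumPy arrays (not the MCMCChain API itself):

```python
import numpy as np

# Minimal illustration of burn-in removal and thinning on a raw sample
# matrix of shape (n_steps, n_dim); MCMCChain wraps the same slicing
# without mutating the original object.
samples = np.arange(20, dtype=float).reshape(10, 2)  # 10 steps, 2 dims

burned = samples[3:]   # drop the first 3 steps (burn-in)
thinned = burned[::2]  # keep every 2nd remaining step
```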
Attributes
n_steps
property
n_steps
Number of MCMC steps (rows of samples).
n_dim
property
n_dim
Parameter dimension (columns of samples).
Functions
from_arrays
classmethod
from_arrays(*, samples, used_hf=None, accepted=None, subchain_length=None)
Construct an MCMCChain from raw arrays.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| samples | ArrayLike | Sample matrix of shape | required |
| used_hf | ArrayLike \| None | Optional boolean vector of length | None |
| accepted | ArrayLike \| None | Optional boolean vector of length | None |
| subchain_length | ArrayLike \| None | Optional integer vector of length | None |
Returns:
| Type | Description |
|---|---|
| chain | A validated |
burn_in
burn_in(burn_in=0)
Drop the first burn_in samples and return a new chain.
thin
thin(thin=1)
Thin the chain by keeping every thin-th sample.
summary
summary(*, theta_true=None, burn_in=0)
Compute lightweight diagnostic summary statistics.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| theta_true | ArrayLike \| None | Optional reference parameter vector. If provided, the RMSE between the posterior estimate and | None |
| burn_in | int | Burn-in used only for the posterior RMSE computation. | 0 |
Returns:
| Type | Description |
|---|---|
| summary | Dictionary of summary metrics. |
SamplingResult
dataclass
SamplingResult(chain, metadata=dict())
Return type for sampling entrypoints.
Attributes:
| Name | Type | Description |
|---|---|---|
| chain | MCMCChain | The resulting MCMC chain. |
| metadata | dict[str, Any] | Run metadata (configuration and bookkeeping). Intended to be lightweight and JSON-serialisable. |
CoarseOutput
Bases: ndarray[Any, dtype[float64]]
Prediction container for LF (surrogate) evaluations.
CoarseOutput behaves like a NumPy array containing the predictive mean, while
attaching a .variance attribute that stores marginal (pointwise) predictive
variance aligned with that mean.
The class exists to keep the LF path compatible with array-oriented APIs (notably
tinyDA forward models and likelihoods), without discarding uncertainty information
produced by the surrogate.
Typical use in this library
- ActiveMCMCModel.coarse returns a CoarseOutput(mean, variance) when the LF surrogate is used.
- ActiveGPLogLike detects the .variance attribute and inflates the observation covariance with a diagonal term diag(variance).
Interpretation
The stored variance is interpreted as marginal variance in observation space:
- mean[i] is the predicted mean for observation component i
- variance[i] is the predictive variance for the same component
Notes
- This is a lightweight container: it stores marginal variances only. Output correlations are not represented.
- variance must be non-negative and have the same shape as the mean.
Attributes:
| Name | Type | Description |
|---|---|---|
| variance | FloatArray \| None | Marginal (pointwise) variance aligned with the mean. |
Examples:
>>> y = CoarseOutput(mean=np.array([1.0, 2.0]), variance=np.array([0.1, 0.2]))
>>> np.asarray(y)
array([1., 2.])
>>> y.variance
array([0.1, 0.2])
See Also
ActiveMCMCModel
Coupled LF/HF model that returns CoarseOutput on the coarse path.
ActiveGPLogLike
Likelihood that uses CoarseOutput.variance to inflate the observation covariance.
Functions
__new__
__new__(mean, variance)
Create a CoarseOutput from mean and marginal variance.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| mean | ndarray[Any, dtype[float64]] | Predictive mean array. | required |
| variance | ndarray[Any, dtype[float64]] | Predictive marginal variance array (same shape as | required |
Returns:
| Type | Description |
|---|---|
| out | Array-like object whose values are the predictive mean and with an attached |
Raises:
| Type | Description |
|---|---|
| ValueError | If |
__array_finalize__
__array_finalize__(obj)
Propagate .variance when NumPy creates a new view.
NumPy calls __array_finalize__ for view-casting and slicing operations.
This method preserves the variance attribute when the new array is created
from an existing CoarseOutput.
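The propagation pattern can be sketched with a minimal ndarray subclass (the class name is hypothetical; the real CoarseOutput adds validation on top of this):

```python
import numpy as np


class MeanWithVariance(np.ndarray):
    """Minimal sketch of the CoarseOutput pattern: an ndarray carrying
    a .variance attribute that survives view-casting and slicing."""

    def __new__(cls, mean, variance):
        obj = np.asarray(mean, dtype=np.float64).view(cls)
        obj.variance = np.asarray(variance, dtype=np.float64)
        return obj

    def __array_finalize__(self, obj):
        # Called by NumPy for views and slices: copy .variance
        # from the parent object when one exists.
        if obj is None:
            return
        self.variance = getattr(obj, "variance", None)
```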
require_variance
require_variance()
Return variance, raising if missing.
Useful when you want a non-optional array at call sites.
ActiveGPLogLike
Bases: AdaptiveGaussianLogLike
Gaussian log-likelihood with surrogate-variance inflation.
ActiveGPLogLike is designed for inference workflows where the forward model may
return either:
- High-fidelity (HF) output: a 1D mean prediction y, or
- Low-fidelity (LF) surrogate output: an array-like mean prediction that also exposes a pointwise predictive variance vector via a .variance attribute (for example CoarseOutput).
If a predictive variance vector is present, the likelihood inflates the observation covariance using a diagonal term: `cov + diag(variance)`.
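A minimal sketch of the inflation, assuming a dense observation covariance and a standalone function (not the AdaptiveGaussianLogLike API):

```python
import numpy as np


def inflated_gaussian_loglike(y_obs, y_pred, cov, pred_var=None):
    """Gaussian log-likelihood; if a predictive variance vector is
    provided, inflate the observation covariance with diag(pred_var)."""
    y_obs = np.asarray(y_obs, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    cov = np.asarray(cov, dtype=float)
    if pred_var is not None:
        cov = cov + np.diag(np.asarray(pred_var, dtype=float))
    resid = y_obs - y_pred
    # log N(y_obs | y_pred, cov) = -0.5 [log det(2*pi*cov) + r^T cov^{-1} r]
    _, logdet = np.linalg.slogdet(2.0 * np.pi * cov)
    return float(-0.5 * (logdet + resid @ np.linalg.solve(cov, resid)))
```

Inflating the covariance widens the likelihood, so LF predictions with large surrogate variance are penalised less sharply than confident ones.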
Functions
loglike
loglike(y_pred)
Evaluate the Gaussian log-likelihood for a prediction.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| y_pred | Any | Model prediction. Either: | required |
Returns:
| Type | Description |
|---|---|
| loglike | Log-likelihood value. |
Raises:
| Type | Description |
|---|---|
| ValueError | If shapes are inconsistent with the observed data, or if a provided variance vector is negative or mismatched. |
ActiveMCMCModel
dataclass
ActiveMCMCModel(lf_model, hf_model, gamma_threshold, log=EvaluationLog(), adaptive=None)
Couple a low-fidelity surrogate (LF) with a high-fidelity model (HF).
ActiveMCMCModel is the core component
of the library. It implements the LF/HF coupling required by active-learning MCMC workflows
and exposes two callables intended to be used in MCMC posteriors:
- ActiveMCMCModel.coarse — LF-first evaluation with an uncertainty trigger that may fall back to HF.
- ActiveMCMCModel.fine — HF evaluation (always) and surrogate update.
In practice, users typically:
- Build an LF surrogate (e.g., POD-GP).
- Wrap LF + HF in an ActiveMCMCModel.
- Choose an inference mode by deciding which posterior(s) to pass to a sampler.
- Run a sampler and analyse both samples and HF-usage diagnostics.
Choosing the inference mode
The posterior argument determines how the sampler interacts with the active model.
Single posterior (MCMC-guided active learning)
Pass a single posterior using model.coarse as the forward model. HF calls happen
internally whenever the uncertainty trigger activates.
- `posterior = Posterior(prior, loglike, model.coarse)`
- sampler:
[`sample_active_chain`][gp_active_mcmc.inference.sampling.sample_active_chain]
Two posteriors (DA-MCMC guided active learning)
Pass two posteriors: coarse (LF-first) and fine (HF). This corresponds to delayed-acceptance MCMC (DA-MCMC).
- `posterior = [Posterior(..., model.coarse), Posterior(..., model.fine)]`
- sampler:
[`sample_active_chain`][gp_active_mcmc.inference.sampling.sample_active_chain]
Adaptive DA-MCMC (recommended)
Use DA-MCMC (two posteriors) and pass an adaptive subchain policy via adaptive=....
The adaptive policy monitors LF-HF discrepancy and adjusts how often fine (HF)
corrections are applied.
- DA-MCMC is mandatory: you must use two posteriors.
- adaptive policy:
[`AdaptiveSubchain`][gp_active_mcmc.inference.adaptive_subchain.AdaptiveSubchain]
- sampler:
[`sample_adaptive_active_chain`][gp_active_mcmc.inference.sampling.sample_adaptive_active_chain]
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| lf_model | ActiveSurrogate | Low-fidelity surrogate implementing: | required |
| hf_model | HighFidelityModel | High-fidelity forward model callable as | required |
| gamma_threshold | float | Uncertainty threshold used by | required |
| log | EvaluationLog | Evaluation log used to record HF usage aligned with coarse evaluations. See | EvaluationLog() |
| adaptive | AdaptiveHook \| None | Optional adaptive hook (e.g., adaptive subchain logic). When provided, the hook is notified during coarse and fine evaluations. See | None |
Returns and types
- coarse returns either:
  - a CoarseOutput if LF is used, or
  - a 1D numpy array (HF output) if HF is triggered.
- fine always returns a 1D numpy array (HF output).
Notes
The uncertainty trigger in coarse is intentionally simple and cheap: it uses the mean predictive
variance over outputs as a scalar criterion.
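The trigger can be sketched as (hypothetical helper, not the library's API):

```python
import numpy as np


def hf_triggered(lf_variance, gamma_threshold):
    """Cheap scalar trigger: fall back to HF when the mean predictive
    variance over outputs exceeds gamma_threshold**2."""
    return bool(np.mean(np.asarray(lf_variance, dtype=float)) > gamma_threshold ** 2)
```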
See Also
ActiveGPLogLike
Likelihood that inflates observation covariance when CoarseOutput.variance is present.
ChunkedMCMCConfig
Chunking configuration required for adaptive subchain sampling.
Functions
coarse
coarse(theta)
Evaluate the coupled model in LF-first (coarse) mode.
Workflow
- Compute LF predictive mean and variance at theta.
- If LF uncertainty is large (mean(var) > gamma_threshold**2), evaluate HF.
- If HF was used, update the LF surrogate with (theta, y_hf).
- Record HF usage in log.used_hf.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| theta | ArrayLike | Parameter vector of shape | required |
Returns:
| Type | Description |
|---|---|
| out | If LF is used, returns a If HF is triggered, returns the HF output as a 1D numpy array of shape |
Raises:
| Type | Description |
|---|---|
| ValueError | If the surrogate returns mean/variance arrays with inconsistent shapes. |
See Also
ActiveGPLogLike
Uses CoarseOutput.variance to inflate the observation covariance.
fine
fine(theta, *, replace_last=True)
Evaluate the coupled model in HF (fine) mode and update the surrogate.
This method always evaluates the HF model and then updates the LF surrogate. It is typically used as the fine level in DA-MCMC, either periodically or according to an adaptive subchain policy.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| theta | ArrayLike | Parameter vector of shape | required |
| replace_last | bool | If True, replace the most recent entry in | True |
Returns:
| Type | Description |
|---|---|
| y_hf | HF model output as a 1D numpy array of shape |
See Also
AdaptiveSubchain
Adaptive policy notified during fine evaluations (LF-HF discrepancy monitoring).
AdaptiveHook
Bases: Protocol
Callback interface for adaptive policies (e.g., adaptive subchains).
ActiveMCMCModel is responsible for
coupling LF and HF. Adaptive logic (such as choosing a changing subchain length) is
expressed as an external hook so that the active model remains small and testable.
The hook receives notifications during model evaluations:
- on_coarse_call is called at the start of ActiveMCMCModel.coarse.
- on_fine_call is called inside ActiveMCMCModel.fine, before updating the surrogate with the new HF observation.
Notes
The hook is deliberately narrow: it observes what happened and updates its own state; it does not perform I/O and it does not change the model outputs directly.
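A structural sketch of the interface, together with a toy hook that only counts notifications (both class names are illustrative, not part of the library):

```python
from typing import Protocol

import numpy as np


class AdaptiveHookLike(Protocol):
    """Structural sketch of the hook interface described above."""

    def on_coarse_call(self) -> None: ...

    def on_fine_call(self, *, y_hf: np.ndarray, y_lf: np.ndarray) -> None: ...


class CountingHook:
    """Toy hook: observes evaluations and updates only its own state."""

    def __init__(self) -> None:
        self.coarse_calls = 0
        self.fine_calls = 0

    def on_coarse_call(self) -> None:
        self.coarse_calls += 1

    def on_fine_call(self, *, y_hf: np.ndarray, y_lf: np.ndarray) -> None:
        self.fine_calls += 1
```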
See Also
AdaptiveSubchain
Default adaptive policy provided by the library.
Functions
on_coarse_call
on_coarse_call()
Called at the start of a coarse evaluation.
on_fine_call
on_fine_call(*, y_hf, y_lf)
Called during fine evaluation before updating the surrogate.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| y_hf | FloatArray | HF model output at the current | required |
| y_lf | FloatArray | LF predictive mean at the current | required |
EvaluationLog
dataclass
EvaluationLog(used_hf=list())
Minimal evaluation log for ActiveMCMCModel.
The active model records whether each coarse evaluation used the HF model. This metadata is useful for diagnostics (HF fraction, where corrections occur) and for sampler bookkeeping.
Notes
The log is aligned with calls to
ActiveMCMCModel.coarse.
If a sampler performs a fine correction after a coarse step at the same MCMC iteration,
the fine step can overwrite the last entry via replace_last.
Attributes:
| Name | Type | Description |
|---|---|---|
| used_hf | list[bool] | Boolean flag per coarse evaluation: |
Functions
append
append(used_hf)
Append a boolean HF-usage flag.
replace_last
replace_last(used_hf)
Replace the most recent HF-usage flag.
If the log is empty, this method appends a new entry. This behaviour
is convenient for samplers that call ActiveMCMCModel.fine
as a correction following a prior coarse evaluation in the same MCMC step.
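The append/replace_last semantics can be sketched with a toy log (hypothetical class, not the library's EvaluationLog):

```python
class MiniLog:
    """Sketch of the replace_last semantics described above."""

    def __init__(self):
        self.used_hf = []

    def append(self, flag):
        self.used_hf.append(bool(flag))

    def replace_last(self, flag):
        # Appends when the log is empty; otherwise overwrites the most
        # recent entry (a fine correction in the same MCMC step).
        if not self.used_hf:
            self.used_hf.append(bool(flag))
        else:
            self.used_hf[-1] = bool(flag)
```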
AdaptiveMetropolisShared
AdaptiveMetropolisShared(*args, share_across_deepcopy=True, **kwargs)
Bases: AdaptiveMetropolis
Adaptive Metropolis proposal with controlled deep-copy semantics.
This proposal extends tinyDA.AdaptiveMetropolis by
defining what happens when the object is deep-copied.
Why this matters
Some sampling workflows deep-copy the proposal (explicitly or implicitly), for example:
- chunked sampling, where the sampler is re-entered multiple times, and
- certain multi-chain patterns.
For adaptive proposals, deep-copying changes the algorithmic behaviour:
- Shared state: adaptation continues across chunks as if there were a single proposal. This is typically what you want for chunked active sampling.
- Independent state: each deepcopy adapts independently, which is appropriate only when you truly want independent proposals (e.g., fully independent chains).
In this library, shared state is often the default because
sample_adaptive_active_chain
runs tinyDA repeatedly in chunks and we want a single evolving proposal.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| *args | Any | Passed through to | () |
| **kwargs | Any | Passed through to | () |
| share_across_deepcopy | bool | Controls deepcopy behaviour: | True |
Notes
Returning self from __deepcopy__ is a deliberate deviation from standard Python
semantics. It preserves adaptation history in workflows where deepcopies are an
implementation detail rather than a user intent.
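A minimal sketch of the idea (hypothetical class; the real proposal inherits from tinyDA.AdaptiveMetropolis):

```python
import copy


class SharedOnDeepcopy:
    """Sketch of the __deepcopy__ override: return self so that
    adaptation state stays shared across deep copies."""

    def __init__(self, share=True):
        self.share = share
        self.updates = 0  # stand-in for accumulated adaptation state

    def __deepcopy__(self, memo):
        if self.share:
            # Deliberate deviation from standard semantics: share state.
            return self
        clone = SharedOnDeepcopy(share=False)
        clone.updates = self.updates
        return clone
```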
See Also
sample_adaptive_active_chain
Chunked sampler where shared proposal state is typically desired.
ChunkedMCMCConfig
Configuration controlling chunk size for adaptive sampling.
Examples:
Shared adaptation across chunks:
>>> prop = AdaptiveMetropolisShared(C0=C0, period=100, share_across_deepcopy=True)
Independent adaptation (useful for truly independent chains):
>>> prop = AdaptiveMetropolisShared(C0=C0, period=100, share_across_deepcopy=False)
Functions
__deepcopy__
__deepcopy__(memo)
Deep-copy the proposal according to share_across_deepcopy.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| memo | dict[int, object] | Standard deepcopy memo dictionary used to preserve object identity and avoid infinite recursion. | required |
Returns:
| Type | Description |
|---|---|
| proposal | If |
ChunkedMCMCConfig
dataclass
ChunkedMCMCConfig(chain_key, chunk_size=500)
Configuration for chunked sampling.
Chunked sampling is used when an algorithm needs to periodically re-enter tinyDA
(i.e., call tda.sample multiple times) rather than running one long
sampling call.
In this library, chunking is primarily used to support
AdaptiveSubchain:
the subsampling rate (coarse steps per fine correction) can change between chunks.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| chain_key | str | Key used by | required |
| chunk_size | int | Budget per chunk measured in coarse evaluation units. A "coarse evaluation unit" corresponds to one LF-first evaluation in the active model (one step in the coarse chain). In adaptive workflows we treat this as the primary computational budget and derive | 500 |
See Also
sample_adaptive_active_chain
Chunked sampler that uses this configuration.
Functions
sample_active_chain
sample_active_chain(*, model, posterior, proposal, iterations, initial_parameters, subsampling_rate, chain_key, n_chains=1, force_sequential=True, store_coarse_chain=True)
Run Active-(DA)-MCMC with a fixed subsampling rate (single tinyDA call).
This is the main entrypoint when the subsampling rate is fixed throughout sampling.
Interpretation of posterior
The posterior argument determines the algorithmic mode:
- If posterior is a single tinyDA.Posterior, the run corresponds to MCMC-guided active learning (single level).
- If posterior is a list of two posteriors [coarse, fine], the run corresponds to delayed-acceptance MCMC (DA-MCMC) guided active learning.
In both cases, subsampling_rate controls the frequency of the fine correction:
roughly, the fine posterior is evaluated every subsampling_rate coarse steps
(depending on tinyDA's internal delayed-acceptance implementation).
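As an idealised illustration of the schedule only (the function name is hypothetical and it ignores tinyDA's actual delayed-acceptance bookkeeping):

```python
def evaluation_schedule(n_fine, subsampling_rate):
    """Idealised DA-MCMC evaluation order: `subsampling_rate` coarse
    steps precede each fine correction."""
    schedule = []
    for _ in range(n_fine):
        schedule.extend(["coarse"] * subsampling_rate)
        schedule.append("fine")
    return schedule
```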
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| model | Any | Active model used during sampling. After sampling it is queried to extract HF usage flags for diagnostics. In typical workflows this is an | required |
| posterior | Posterior \| list[Posterior] | Either a single posterior (single-level) or a list of two posteriors | required |
| proposal | Proposal | Proposal passed to | required |
| iterations | int | Number of | required |
| initial_parameters | ArrayLike | Initial parameter vector. | required |
| subsampling_rate | int | Fine-correction frequency. Must be positive. | required |
| chain_key | str | Chain key used by | required |
| n_chains | int | Number of chains to run (passed to | 1 |
| force_sequential | bool | If True, force sequential execution (useful for reproducibility). | True |
| store_coarse_chain | bool | If True, store the coarse chain (when supported by | True |
|
Returns:
| Type | Description |
|---|---|
| result | |
Raises:
| Type | Description |
|---|---|
| ValueError | If |
See Also
sample_adaptive_active_chain
Chunked sampler used when the subsampling rate changes over time.
sample_adaptive_active_chain
sample_adaptive_active_chain(*, model, posterior, proposal, n_coarse_evals, initial_parameters, chain_key, config, n_chains=1, force_sequential=True, store_coarse_chain=True)
Run adaptive DA-MCMC guided active learning using chunked sampling.
This entrypoint supports the recommended workflow:
- DA-MCMC (two posteriors: coarse + fine), and
- an adaptive policy such as AdaptiveSubchain attached to the active model.
Why chunking is needed
In adaptive runs, the subsampling rate may change over time (because the subchain length
is adapted online). Since tinyDA.sample takes a fixed subsampling_rate
per call, we run multiple shorter calls ("chunks") and update the subsampling rate between
chunks.
Budgeting
The overall budget is expressed as a total number of coarse evaluation units
(n_coarse_evals). Each chunk consumes up to config.chunk_size coarse evaluations.
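The budgeting amounts to splitting the total into chunk-sized pieces; a minimal sketch (hypothetical helper, not the library's API):

```python
def plan_chunks(n_coarse_evals, chunk_size):
    """Split a total coarse-evaluation budget into per-chunk budgets."""
    if n_coarse_evals <= 0 or chunk_size <= 0:
        raise ValueError("budgets must be positive")
    chunks = []
    remaining = n_coarse_evals
    while remaining > 0:
        # Each chunk consumes up to chunk_size coarse evaluations.
        chunks.append(min(chunk_size, remaining))
        remaining -= chunks[-1]
    return chunks
```

Between consecutive chunks the sampler can read the current subchain length from the adaptive policy and pass it as the subsampling rate for the next tinyDA call.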
Requirements
- The adaptive workflow requires model.adaptive.state.subchain_length.
- DA-MCMC is mandatory in this mode: posterior should be [coarse, fine].
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| model | Any | Active model with an adaptive policy. Must expose | required |
| posterior | Posterior \| list[Posterior] | Two-level posterior list | required |
| proposal | Proposal | Proposal passed to | required |
| n_coarse_evals | int | Total budget in coarse evaluation units. | required |
| initial_parameters | ArrayLike | Initial parameter vector. | required |
| chain_key | str | Chain key used by | required |
| config | ChunkedMCMCConfig | Chunking configuration ( | required |
| n_chains | int | Number of chains. Currently only | 1 |
| force_sequential | bool | If True, force sequential execution. | True |
| store_coarse_chain | bool | If True, store the coarse chain. | True |
|
Returns:
| Type | Description |
|---|---|
| result | |
Raises:
| Type | Description |
|---|---|
| ValueError | If budgets are non-positive, if |
See Also
ChunkedMCMCConfig
Chunk configuration controlling chunk_size.
ActiveMCMCModel
Active model that provides coarse and fine callables for the two posteriors.