Estimators¶
- class lgbn.estimators.GreedyEquivalentSearch(score=None, max_iter=1000000, eps=1e-09)¶
Greedy Equivalent Search structure learning algorithm.
The Greedy Equivalent Search algorithm learns the structure of a Bayesian network that maximizes the given score. The search procedure starts with an empty graph. Edges are added until no addition increases the score, and then removed until no removal increases the score. Operations, and thus networks, are compared up to equivalence classes: an equivalence class contains all networks that share the same skeleton (the same edges regardless of orientation) and the same v-structures, and therefore encode the same conditional independencies. This algorithm is reasonably fast when used with a decomposable score, which can be cached.
See [3] for a detailed description of the Greedy Equivalent Search algorithm.
Note
This implementation requires a decomposable score, although there exist other implementations that work with non-decomposable scores.
References
- [3]
D. M. Chickering, “Optimal Structure Identification With Greedy Search,” Journal of Machine Learning Research, vol. 3, pp. 507–554, Nov. 2002.
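The two-phase forward/backward procedure can be illustrated with a minimal, self-contained sketch in plain Python. It deliberately ignores the equivalence-class (CPDAG) bookkeeping that the real algorithm performs and substitutes a toy decomposable score that rewards parent sets matching a fixed target structure; `TRUE_PARENTS`, `local_score`, `total_score` and `ges_sketch` are illustrative names, not part of the lgbn API.

```python
def has_cycle(edges, n):
    # Depth-first search for a directed cycle among nodes 0..n-1.
    adj = {v: [w for (u, w) in edges if u == v] for v in range(n)}
    state = dict.fromkeys(range(n), 0)  # 0 = unseen, 1 = on stack, 2 = done

    def dfs(v):
        state[v] = 1
        for w in adj[v]:
            if state[w] == 1 or (state[w] == 0 and dfs(w)):
                return True
        state[v] = 2
        return False

    return any(state[v] == 0 and dfs(v) for v in range(n))

# Toy decomposable score: each node's local score penalizes deviation
# from a fixed target parent set (a stand-in for a data-driven score).
TRUE_PARENTS = {0: set(), 1: {0}, 2: {0, 1}}

def local_score(node, parents):
    return -len(set(parents) ^ TRUE_PARENTS[node])

def total_score(edges, n):
    parents = {v: {u for (u, w) in edges if w == v} for v in range(n)}
    return sum(local_score(v, parents[v]) for v in range(n))

def ges_sketch(n):
    edges = set()
    # Forward phase: add the single best edge while any addition helps.
    while True:
        gains = {
            (u, v): total_score(edges | {(u, v)}, n) - total_score(edges, n)
            for u in range(n) for v in range(n)
            if u != v and (u, v) not in edges
            and not has_cycle(edges | {(u, v)}, n)
        }
        best = max(gains, key=gains.get, default=None)
        if best is None or gains[best] <= 0:
            break
        edges.add(best)
    # Backward phase: delete edges while any deletion helps.
    while True:
        gains = {e: total_score(edges - {e}, n) - total_score(edges, n)
                 for e in edges}
        best = max(gains, key=gains.get, default=None)
        if best is None or gains[best] <= 0:
            break
        edges.remove(best)
    return edges
```

Because the score is decomposable, adding or removing one edge changes only one node's local term, which is what makes caching effective in the real implementation.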
- get_params(deep=True)¶
Get parameters for this estimator.
- Parameters
deep (bool, default=True) – If True, will return the parameters for this estimator and contained subobjects that are estimators.
- Returns
params – Parameter names mapped to their values.
- Return type
dict
- search()¶
Search the space of possible models for the one that maximizes the score of this estimator.
- set_params(**kwargs)¶
Set the parameters of this estimator.
The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object.
- Parameters
**params (dict) – Estimator parameters.
- Returns
self – Estimator instance.
- Return type
estimator instance
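The double-underscore convention mirrors scikit-learn's parameter routing. A minimal sketch of how such routing could work (the `Score.penalty` attribute and this `Estimator` class are hypothetical illustrations, not the lgbn API):

```python
class Score:
    def __init__(self, penalty=1.0):
        self.penalty = penalty

class Estimator:
    def __init__(self, score=None, eps=1e-9):
        self.score = score
        self.eps = eps

    def set_params(self, **params):
        # Keys like "score__penalty" are routed to the nested object;
        # plain keys are set on the estimator itself.
        for key, value in params.items():
            name, _, sub = key.partition("__")
            if sub:
                setattr(getattr(self, name), sub, value)
            else:
                setattr(self, name, value)
        return self

est = Estimator(score=Score())
est.set_params(eps=1e-6, score__penalty=2.0)
```

After the call, `est.eps` is updated directly and `est.score.penalty` is updated through the `score__penalty` key.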
- class lgbn.estimators.GreedyHillClimbing(score=None, start_net=None, max_iter=1000000, eps=1e-09, random_state=None)¶
Greedy Hill Climbing structure search algorithm.
The Greedy Hill Climbing algorithm learns the structure of a Bayesian network that maximizes the given score. The search procedure starts with an initial network, which defaults to a fully disconnected network. Edges are added, removed or have their direction reversed one at a time until no more modifications increase the overall score of the network. This algorithm is reasonably fast when used with a decomposable score which can be cached.
See p. 40 in [2] for a detailed description of the Greedy Hill Climbing algorithm. The source refers to Greedy Hill Climbing as Max-Min Hill Climbing.
Note
This implementation requires a decomposable score, although there exist other implementations that work with non-decomposable scores.
References
- [2]
I. Tsamardinos, L. E. Brown, and C. F. Aliferis, “The max-min hill-climbing Bayesian network structure learning algorithm,” Mach Learn, vol. 65, no. 1, pp. 31–78, Oct. 2006, doi: 10.1007/s10994-006-6889-7.
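The three local operators (add, delete, reverse) can be sketched with a small self-contained example. As with any such sketch, the cycle check and the toy decomposable score (`TRUE_PARENTS`, `local_score`, `total_score`) are stand-ins for the real implementation, and the function names are illustrative, not the lgbn API; the `eps` argument plays the same role as the estimator's tolerance.

```python
def has_cycle(edges, n):
    # Depth-first search for a directed cycle among nodes 0..n-1.
    adj = {v: [w for (u, w) in edges if u == v] for v in range(n)}
    state = dict.fromkeys(range(n), 0)  # 0 = unseen, 1 = on stack, 2 = done

    def dfs(v):
        state[v] = 1
        for w in adj[v]:
            if state[w] == 1 or (state[w] == 0 and dfs(w)):
                return True
        state[v] = 2
        return False

    return any(state[v] == 0 and dfs(v) for v in range(n))

# Toy decomposable score rewarding a fixed target structure: 1 -> 0, 1 -> 2.
TRUE_PARENTS = {0: {1}, 1: set(), 2: {1}}

def local_score(node, parents):
    return -len(set(parents) ^ TRUE_PARENTS[node])

def total_score(edges, n):
    parents = {v: {u for (u, w) in edges if w == v} for v in range(n)}
    return sum(local_score(v, parents[v]) for v in range(n))

def hill_climb(n, start_edges=(), eps=1e-9):
    edges = set(start_edges)
    while True:
        # Neighbourhood: every single-edge addition, deletion, or reversal.
        neighbours = []
        for u in range(n):
            for v in range(n):
                if u == v:
                    continue
                if (u, v) in edges:
                    neighbours.append(edges - {(u, v)})               # delete
                    neighbours.append((edges - {(u, v)}) | {(v, u)})  # reverse
                elif (v, u) not in edges:
                    neighbours.append(edges | {(u, v)})               # add
        best = max(
            (c for c in neighbours if not has_cycle(c, n)),
            key=lambda c: total_score(c, n),
            default=None,
        )
        # Stop when no acyclic neighbour improves the score by more than eps.
        if best is None or total_score(best, n) <= total_score(edges, n) + eps:
            return edges
        edges = set(best)
```

Starting from the mis-oriented edge `(0, 1)`, the first move is a reversal rather than an addition or deletion, which is the operator that distinguishes hill climbing from the pure add/remove phases of Greedy Equivalent Search.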
- get_params(deep=True)¶
Get parameters for this estimator.
- Parameters
deep (bool, default=True) – If True, will return the parameters for this estimator and contained subobjects that are estimators.
- Returns
params – Parameter names mapped to their values.
- Return type
dict
- search()¶
Search the space of possible models for the one that maximizes the score of this estimator.
- set_params(**kwargs)¶
Set the parameters of this estimator.
The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object.
- Parameters
**params (dict) – Estimator parameters.
- Returns
self – Estimator instance.
- Return type
estimator instance
- class lgbn.estimators.K2Search(score=None, ordering=None, eps=1e-09)¶
K2 structure learning algorithm.
The K2 algorithm learns the structure of a Bayesian network that maximizes the given score. The search procedure is guided by a given topological ordering of the network. In that ordering, if node x comes before node y, then node y can never be a parent of node x. This vastly reduces the search space resulting in a significant speedup, even without using caching.
See [1] for a detailed description of the K2 algorithm.
Note
This implementation requires a decomposable score, although there exist other implementations that work with non-decomposable scores.
References
- [1]
G. F. Cooper and E. Herskovits, “A Bayesian method for the induction of probabilistic networks from data,” Mach Learn, vol. 9, no. 4, pp. 309–347, Oct. 1992, doi: 10.1007/BF00994110.
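The ordering-guided search can be sketched compactly: each node greedily picks parents only from its predecessors in the ordering, so the result is acyclic by construction and no cycle checks are needed. The toy decomposable score and the `k2_sketch` name below are illustrative, not the lgbn API.

```python
# Toy decomposable score (a stand-in for a data-driven score such as BIC):
# each node's local score penalizes deviation from a fixed target parent set.
TRUE_PARENTS = {0: set(), 1: {0}, 2: {0, 1}}

def local_score(node, parents):
    return -len(set(parents) ^ TRUE_PARENTS[node])

def k2_sketch(ordering, local_score):
    # Because parents are drawn only from a node's predecessors in the
    # ordering, the resulting graph is acyclic by construction.
    parents = {v: set() for v in ordering}
    for i, v in enumerate(ordering):
        predecessors = set(ordering[:i])
        # Greedily add the single best-scoring parent while it improves
        # this node's local score.
        while True:
            current = local_score(v, parents[v])
            candidates = {p: local_score(v, parents[v] | {p})
                          for p in predecessors - parents[v]}
            best = max(candidates, key=candidates.get, default=None)
            if best is None or candidates[best] <= current:
                break
            parents[v].add(best)
    return parents
```

Each node is processed independently, which is why the ordering shrinks the search space so dramatically: the algorithm never has to reconsider earlier nodes or check for cycles.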
- get_params(deep=True)¶
Get parameters for this estimator.
- Parameters
deep (bool, default=True) – If True, will return the parameters for this estimator and contained subobjects that are estimators.
- Returns
params – Parameter names mapped to their values.
- Return type
dict
- search()¶
Search the space of possible models for the one that maximizes the score of this estimator.
- set_params(**kwargs)¶
Set the parameters of this estimator.
The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object.
- Parameters
**params (dict) – Estimator parameters.
- Returns
self – Estimator instance.
- Return type
estimator instance
- class lgbn.estimators.ScoreSearchEstimator(score=None, eps=1e-09)¶
A structure estimator using score-based search.
Note
This class is a general interface and cannot be used directly; use one of its subclasses instead.
- eps: float = 1e-09¶
Tolerance for equality testing of numeric values. Two values a and b are considered equal if abs(a - b) < eps.
- fit(data)¶
Fit the model to the given data.
- Parameters
data (pandas.DataFrame) – A DataFrame with one row per observation and one column per variable. Column names will be used for node identifiers in the resulting model.
- Returns
self – The fitted estimator.
- Return type
estimator instance
- get_params(deep=True)¶
Get parameters for this estimator.
- Parameters
deep (bool, default=True) – If True, will return the parameters for this estimator and contained subobjects that are estimators.
- Returns
params – Parameter names mapped to their values.
- Return type
dict
- model: BayesianNetwork = None¶
The model resulting from the estimation.
- property score¶
Score instance to use for scoring networks in the search procedure.
- search() → BayesianNetwork¶
Search the space of possible models for the one that maximizes the score of this estimator.
- set_params(**kwargs)¶
Set the parameters of this estimator.
The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object.
- Parameters
**params (dict) – Estimator parameters.
- Returns
self – Estimator instance.
- Return type
estimator instance