# Version 0.8

* TPOT now detects whether there is missing data in the provided data set and, if so, adds an evolvable imputer to the primitive set.
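
    A minimal sketch of this behavior (the dataset and search-budget values here are illustrative):

    ```python
    import numpy as np
    from sklearn.datasets import load_iris
    from tpot import TPOTClassifier

    X, y = load_iris(return_X_y=True)

    # Knock out ~10% of the entries so the feature matrix contains NaNs.
    rng = np.random.RandomState(42)
    X[rng.rand(*X.shape) < 0.1] = np.nan

    # TPOT notices the missing values and makes an imputer available to the
    # evolved pipelines; no extra configuration is needed.
    tpot = TPOTClassifier(generations=5, population_size=20, verbosity=2)
    tpot.fit(X, y)
    ```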

* TPOT now allows you to set a group parameter in the `fit` function in order to add group labels for the samples, which are used when splitting the dataset into training and testing sets.
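
    For example (a sketch assuming the keyword is `groups`, matching scikit-learn's group-aware splitters; the group assignment below is made up):

    ```python
    import numpy as np
    from sklearn.datasets import load_digits
    from tpot import TPOTClassifier

    X, y = load_digits(return_X_y=True)

    # One group label per sample; samples sharing a label stay on the same
    # side of any group-aware train/test split during cross-validation.
    groups = np.arange(len(y)) % 10

    tpot = TPOTClassifier(generations=5, population_size=20)
    tpot.fit(X, y, groups=groups)
    ```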

* TPOT now allows you to set a subsample ratio of the training instances with the `subsample` parameter. For example, setting it to 0.5 means that TPOT randomly uses half of the training samples during the pipeline optimization process.
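
    For instance (the search-budget values are illustrative):

    ```python
    from tpot import TPOTClassifier

    # Each candidate pipeline is scored on a random half of the training
    # data, roughly halving the cost of every evaluation.
    tpot = TPOTClassifier(subsample=0.5, generations=5, population_size=20)
    ```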

* TPOT now has more [built-in TPOT configurations](/using/#built-in-tpot-configurations), including TPOT MDR and TPOT light. TPOT MDR now supports both classification and regression.
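
    Selecting a built-in configuration is a one-liner via `config_dict` (a sketch; the search-budget argument is illustrative):

    ```python
    from tpot import TPOTClassifier, TPOTRegressor

    # "TPOT light" restricts the search to fast, simple operators.
    clf = TPOTClassifier(config_dict='TPOT light', generations=5)

    # TPOT MDR now works for regression problems as well.
    reg = TPOTRegressor(config_dict='TPOT MDR', generations=5)
    ```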

* TPOTClassifier/TPOTRegressor now provide three useful internal attributes: `_fitted_pipeline`, `_pareto_front_fitted_pipelines`, and `_evaluated_individuals`.
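
    A sketch of inspecting these attributes after a run (it is an assumption here that `_pareto_front_fitted_pipelines` is only populated at the highest verbosity level, so the example sets `verbosity=3`):

    ```python
    from sklearn.datasets import load_iris
    from tpot import TPOTClassifier

    X, y = load_iris(return_X_y=True)
    tpot = TPOTClassifier(generations=5, population_size=20, verbosity=3)
    tpot.fit(X, y)

    print(tpot._fitted_pipeline)                # the best pipeline, already fit
    print(tpot._pareto_front_fitted_pipelines)  # accuracy/complexity trade-off front
    print(tpot._evaluated_individuals)          # every pipeline tried, with its score
    ```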

* Fixed a reproducibility issue where setting `random_state` didn't necessarily result in the same results every time. This bug was present since TPOT v0.7.

* Refined input checking in TPOT.

* Removed code that was incompatible with Python 2.


# Version 0.7

* **TPOT now has multiprocessing support.** TPOT allows you to use multiple processes in parallel to accelerate the pipeline optimization process with the `n_jobs` parameter.
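
    For example (a sketch; `n_jobs=-1` follows the scikit-learn convention of using all available cores):

    ```python
    from tpot import TPOTClassifier

    # Evaluate candidate pipelines in parallel across all CPU cores.
    tpot = TPOTClassifier(generations=5, population_size=20, n_jobs=-1)
    ```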

* We tweaked TPOT's underlying evolutionary optimization algorithm to work even better, including using the [mu+lambda algorithm](http://deap.readthedocs.io/en/master/api/algo.html#deap.algorithms.eaMuPlusLambda). This algorithm gives you more control over how many pipelines are generated every iteration with the `offspring_size` parameter.
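
    As a sketch, `population_size` plays the role of mu and `offspring_size` the role of lambda (the values below are illustrative):

    ```python
    from tpot import TPOTClassifier

    # Keep 20 pipelines per generation (mu) and breed 40 new candidate
    # pipelines (lambda) from them each iteration.
    tpot = TPOTClassifier(population_size=20, offspring_size=40, generations=5)
    ```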

* Fixed a reproducibility issue where setting `random_state` didn't necessarily result in the same results every time. This bug was present since TPOT v0.6.

* Refined the default operators and parameters in TPOT, so TPOT 0.7 should work even better than 0.6.

* TPOT now supports sample weights in the fitness function if some of your samples are more important to classify correctly than others. The sample weights option works the same as in scikit-learn, e.g., `tpot.fit(x_train, y_train, sample_weights=sample_weights)`.
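
    A minimal sketch (the weighting scheme is made up; the keyword name follows the note above):

    ```python
    import numpy as np
    from sklearn.datasets import load_iris
    from tpot import TPOTClassifier

    X, y = load_iris(return_X_y=True)

    # Illustrative weighting: count class 0 samples twice as heavily.
    sample_weights = np.where(y == 0, 2.0, 1.0)

    tpot = TPOTClassifier(generations=5, population_size=20)
    tpot.fit(X, y, sample_weights=sample_weights)
    ```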