- Singapore (UTC +08:00)
- https://int-lyc.gitbook.io/intlyc/
- https://orcid.org/0000-0001-5565-6275
Stars
- Large Language Model Evolutionary Algorithm
- This repo contains the source code for the paper "Evolution Strategies at Scale: LLM Fine-Tuning Beyond Reinforcement Learning"
- Codebase for Evolutionary Reinforcement Learning (ERL) from the paper "Evolution-Guided Policy Gradients in Reinforcement Learning", published at NeurIPS 2018
- The official implementation of ERL-Re2.
- ShinkaEvolve: Towards Open-Ended and Sample-Efficient Program Evolution
- A collection of LLMs for optimization, including modeling and solving
- An open-source library for GPU-accelerated robot learning and sim-to-real transfer.
- LLM4AD: A Platform for Algorithm Design with Large Language Model
- Neuroevolution is a Competitive Alternative to Reinforcement Learning for Skill Discovery
- A comprehensive collection of KAN (Kolmogorov-Arnold Network)-related resources, including libraries, projects, tutorials, papers, and more, for researchers and developers in the Kolmogorov-Arnold N…
- Massively parallel rigid-body physics simulation on accelerator hardware.
- A large-scale benchmark for co-optimizing the design and control of soft robots, as seen in NeurIPS 2021.
- The AI Scientist-v2: Workshop-Level Automated Scientific Discovery via Agentic Tree Search
- EvoMO is a GPU-accelerated library for evolutionary multiobjective optimization (EMO)
- Discovering Quality-Diversity Algorithms via Meta-Black-Box Optimization
- Code for CTPG accompanying the paper "Efficient Multi-Task Reinforcement Learning with Cross-Task Policy Guidance" (NeurIPS 2024).
- (ICML 2024) The official code for Value-Evolutionary-Based Reinforcement Learning
- (ICML 2024) The official code for EvoRainbow: Combining Improvements in Evolutionary Reinforcement Learning for Policy Search
- Accelerated Quality-Diversity
- Explainable benchmarking using XAI
- Model-Based Transfer Learning for Contextual Reinforcement Learning (NeurIPS 2024)
- Official repository of Evolutionary Optimization of Model Merging Recipes
- MetaDE is a GPU-accelerated evolutionary framework that optimizes Differential Evolution (DE) strategies via meta-level evolution. Supporting both JAX and PyTorch, it dynamically adapts mutation an…
- EvoRL is a fully GPU-accelerated framework for Evolutionary Reinforcement Learning, implemented with JAX. It supports Reinforcement Learning (RL), Evolutionary Computation (EC), Evolution-guided Re…
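Many of the starred repositories above build on the evolution strategies / evolutionary search family of algorithms. As a minimal orientation sketch (not drawn from any of the listed codebases, all names and parameters are hypothetical), a (1, λ) evolution strategy on a toy one-dimensional objective looks like this:

```python
import random

def fitness(x):
    # Toy objective: maximize -(x - 3)^2, optimum at x = 3.
    return -(x - 3.0) ** 2

def evolve(generations=200, pop_size=20, sigma=0.5, seed=0):
    """Minimal (1, lambda) evolution strategy: each generation samples
    pop_size Gaussian perturbations of the current parent and keeps
    the fittest offspring as the next parent."""
    rng = random.Random(seed)
    parent = 0.0
    for _ in range(generations):
        offspring = [parent + rng.gauss(0.0, sigma) for _ in range(pop_size)]
        parent = max(offspring, key=fitness)
    return parent

best = evolve()  # should converge near the optimum at 3.0
```

The repositories above scale variants of this loop (population-based perturbation plus selection) to neural-network parameters, program text, or hyperparameters, usually with GPU-parallel evaluation.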
