Commit e2ce4c6

committed: more cleanup 3
1 parent afd9889

File tree

1 file changed (+11 −3 lines)


index.md

Lines changed: 11 additions & 3 deletions
@@ -5,12 +5,15 @@ permalink: /
---

# Introduction
- This is a website dedicated to the benchmarking of optimization algorithms.
- It was the main website of the {{ site.projectNameStyled }} framework, a `Java 1.7` software designed to make the evaluation, benchmarking, and comparison of [optimization](http://en.wikipedia.org/wiki/Mathematical_optimization) or [Machine Learning](http://en.wikipedia.org/wiki/Machine_learning) algorithms easier.
+ This is a website dedicated to the [benchmarking](https://thomasweise.github.io/research/areas/benchmarking) of optimization algorithms.
+ It was the main website of the {{ site.projectNameStyled }} framework, a `Java 1.7` software from the mid-2010s to about 2020 that was designed to make the evaluation, benchmarking, and comparison of [optimization](http://en.wikipedia.org/wiki/Mathematical_optimization) or [Machine Learning](http://en.wikipedia.org/wiki/Machine_learning) algorithms easier.
This software was developed during a research project at Hefei University in Hefei, Anhui, China until about 2020.
*It is no longer under active development.*

- Still, its source code is available.
+ A better framework for the implementation of and experimentation with optimization algorithms in Java is [aitoa-code](https://thomasweise.github.io/aitoa-code), which, by now, is also no longer in active development.
+ Under active development and in wide use is [moptipy](https://thomasweise.github.io/moptipy), which does the same in [Python](https://thomasweise.github.io/programmingWithPython).
+
+ The optimizationBenchmarking source code is still available, though.
This software can load log files created by (experiments with) an optimization or Machine Learning algorithm implementation, evaluate how the implementation has progressed over time, and compare its performance to other algorithms (or implementations) -- over several different benchmark cases.
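As a rough illustration of how experiments could produce such log files, here is a minimal Java sketch that writes one text row per improvement during a run. The file name `run_01.txt` and the column layout (function evaluations, elapsed milliseconds, best objective value) are assumptions made for illustration, not the framework's prescribed format:

```java
import java.io.IOException;
import java.io.PrintWriter;
import java.util.Random;

/**
 * Sketch: an experiment logging the progress of a dummy "algorithm" so
 * that an evaluator tool can later read the run as a time/quality trace.
 * One row per improvement: "FEs elapsedMillis bestObjectiveValue".
 * File name and column layout are illustrative assumptions.
 */
public class LogWriterSketch {

  /** Run a dummy random search and log every improvement; returns the best value found. */
  static double runAndLog(PrintWriter out, Random rnd, int maxFEs) {
    long start = System.currentTimeMillis();
    double best = Double.POSITIVE_INFINITY;
    for (int fes = 1; fes <= maxFEs; fes++) {
      double f = rnd.nextDouble(); // stand-in for one objective function evaluation
      if (f < best) {              // record only improvements to keep logs small
        best = f;
        out.println(fes + " " + (System.currentTimeMillis() - start) + " " + best);
      }
    }
    return best;
  }

  public static void main(String[] args) throws IOException {
    try (PrintWriter out = new PrintWriter("run_01.txt")) { // one file per run
      double best = runAndLog(out, new Random(42L), 1000);
      System.out.println("best objective value: " + best);
    }
  }
}
```

Keeping one file per run means runs stay independent and a log-file-based evaluator can treat each file as a separate sample of the algorithm's behavior.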
It can create reports in [LaTeX](http://en.wikipedia.org/wiki/LaTeX) (ready for publication) or [XHTML](http://en.wikipedia.org/wiki/XHTML) formats, or export its findings to text files which can later be loaded by other applications. It makes no requirements regarding the implementation of the algorithms under investigation (not even regarding the programming language) and does not require any programming on your side!
It has a convenient GUI.
@@ -26,3 +29,8 @@ The {{ site.projectNameStyled }} framework prescribes the following work flow:
3. *Experiments:* Run your algorithm, i.e., apply it a few times to each benchmark instance; this gives you the log files. You may want to do this several times with different parameter settings of your algorithm, or for different algorithms, so that you have comparison data.
4. *Use Evaluator:* Now you can use our evaluator component to find out how well your method works! For this, you define the *dimensions* you have measured (such as runtime and solution quality), the features of your benchmark instances (such as the number of cities in a Traveling Salesman Problem or the scale and symmetry of a numerical problem), the parameter settings of your algorithm (such as the population size of an EA), the information you want to get (ECDF? Performance over time?), and how you want to get it (LaTeX, optimized for IEEE Transactions, ACM, or Springer LNCS? Or maybe XHTML for the web?). Our evaluator will create the report with the desired information in the desired format.
5. By interpreting the report and the advanced statistics presented to you, you gain deeper insight into your algorithm's performance as well as into the features and hardness of the benchmark instances you used. You can also directly use building blocks from the generated reports in your publications.
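The ECDF mentioned in step 4 can be sketched in a few lines: for a fixed quality target, the ECDF at time t is the fraction of runs that have reached the target by time t. The class name, method, and sample times below are made up for illustration:

```java
import java.util.Arrays;

/**
 * Sketch: empirical cumulative distribution function (ECDF) over runs.
 * For a fixed quality target, ecdf(t) = fraction of runs whose recorded
 * time-to-target is less than or equal to t. Sample data is illustrative.
 */
public class EcdfSketch {

  /**
   * timesToTarget[i] = time at which run i first reached the target
   * quality (Long.MAX_VALUE if it never did).
   */
  static double ecdf(long[] timesToTarget, long t) {
    long hit = Arrays.stream(timesToTarget).filter(x -> x <= t).count();
    return hit / (double) timesToTarget.length;
  }

  public static void main(String[] args) {
    long[] times = {120L, 450L, 900L, Long.MAX_VALUE}; // 4 runs, one never succeeds
    System.out.println(ecdf(times, 100L));  // 0.0: no run has reached the target yet
    System.out.println(ecdf(times, 500L));  // 0.5: two of four runs succeeded
    System.out.println(ecdf(times, 1000L)); // 0.75: the failed run caps the curve below 1
  }
}
```

Plotted over t, this curve shows at a glance how quickly and how reliably an algorithm reaches a given solution quality, which is why it is a common choice when comparing runs across benchmark instances.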
+
+ ## Posts
+ {% for post in site.posts %}
+ - [{{ post.title }}]({{ site.baseurl }}/{{ post.url }}), {{ post.date -}}
+ {% endfor %}
