<p>Developed at the <a href="http://iao.hfuu.edu.cn">Institute of Applied Optimization</a> of the Faculty of Computer Science and Technology of the <a href="http://www.hfuu.edu.cn">Hefei University</a> and the University of Science and Technology of China (<a href="http://www.ustc.edu.cn/">USTC</a>). Supported by NSFC Project 61673359.</p>
</ul>
<p>This page was generated by <a href="https://pages.github.com">GitHub Pages</a> using a modified version of the Architect theme by <a href="https://github.com/jasonlong">Jason Long</a>.</p>
downloads.md: 4 additions & 33 deletions
@@ -6,34 +6,13 @@ categories: default
<h1>Downloads & Installation</h1>
To use the software, first <a href="#down">download</a> it. It requires a set of other tools, which you should then <a href="#inst">install</a> to get everything working smoothly. Alternatively, if you are using Linux, MacOS, or Windows (on a 64-bit system), you can use the <a href="#docker">dockerized</a> version of the GUI. The latter is recommended, as it only requires you to install Docker and then you are ready to go!
<h2 id="docker">Dockerized Version: Only Install Docker, Nothing Else!</h2>
The graphical user interface ([GUI](https://github.com/optimizationBenchmarking/evaluator-gui)) of our {{ site.projectNameStyled }} [evaluator](https://github.com/optimizationBenchmarking/evaluator-evaluator) has <a href="{{ site.baseurl }}/page/2016/05/16/dockerized.html">now</a> been "[dockerized](https://hub.docker.com/r/optimizationbenchmarking/evaluator-gui/)". [Docker](http://www.docker.com/) is an application that allows you to define, publish, and run containers. Containers are something like lightweight VMs: under Linux, they live as normal processes on the same kernel as the OS; under Windows and Mac OS, they run in small Virtual Box VMs. Docker can be installed following the guidelines below:
* for [Linux](https://docs.docker.com/linux/step_one/), you can run `curl -fsSL https://get.docker.com/ | sh` on your command line and everything is done automatically (if you have `curl` installed, which is normally the case),
* for [Windows](https://docs.docker.com/windows/step_one/)
* for [Mac OS](https://docs.docker.com/mac/step_one/)
After doing this, you can start our container by typing the following command into a normal terminal (Linux), the *Docker Quickstart Terminal* (Mac OS), or the *Docker Toolbox Terminal* (Windows):
`docker run -t -d -p 9999:8080/tcp optimizationbenchmarking/evaluator-gui`
The first time you run this command, it will download the software (once and only once). Once the container is started, you can access it with your browser at address
[http://localhost:9999](http://localhost:9999) under Linux or
`http://<dockerIP>:9999` under Windows and Mac OS, where `dockerIP` is the IP address of your Docker container. This address is displayed when you run the container. You can also obtain it with the command `docker-machine ip default`.
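Which address applies thus depends only on where Docker is running. As a minimal, hypothetical sketch (the helper name and defaults below are our own illustration, not part of the project), the GUI address can be assembled like this:

```python
def gui_url(host_port=9999, docker_ip=None):
    """Build the address of the evaluator GUI.

    On Linux the published port is reachable on localhost; on Windows
    and Mac OS, pass the Docker machine's IP address instead (as shown
    by `docker-machine ip default`).
    """
    host = docker_ip if docker_ip is not None else "localhost"
    return f"http://{host}:{host_port}"

print(gui_url())                            # Linux
print(gui_url(docker_ip="192.168.99.100"))  # Windows / Mac OS (example IP)
```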
The container contains a full installation of our system, including the [`Java 8 OpenJDK`](http://openjdk.java.net/projects/jdk8/), [`TeX Live`](http://www.tug.org/texlive/), [`R`](https://www.r-project.org/), the needed `R` packages, and [`ghostscript`](http://ghostscript.com/). No further setup is needed. It is thus about 600 MB in size.
[Here](https://hub.docker.com/r/optimizationbenchmarking/evaluator-gui/) and [here](https://github.com/optimizationBenchmarking/environments-evaluator-gui/blob/master/README.md) you can find the command line options explained. This will allow you to use our system efficiently.
To use the software, first <a href="#down">download</a> it.
The software requires a set of other tools, which you should then <a href="#inst">install</a> in order to get everything to work smoothly.
<h2 id="down">Downloads of Java Software Version</h2>
This only holds if you want to run the `jar`s directly and do not use Docker. Otherwise, this step is irrelevant for you – if you use Docker, nothing else is required.
So if you want to use the `jar`s directly, maybe because you already have R and Java and TeX Live installed and want to save disk space: You may enjoy the {{ site.projectNameStyled }} software Java `jar` releases in two flavors:
This only holds if you want to run the `jar`s directly.
You may enjoy the {{ site.projectNameStyled }} software Java `jar` releases in two flavors:
There are currently two slide sets with documentation about our system:
- <a href="https://circleci.com/api/v1/project/optimizationBenchmarking/documentation-intro-slides/latest/artifacts/0/$CIRCLE_ARTIFACTS/intro-slides.pdf?branch=master">a general introduction</a>
index.md: 12 additions & 23 deletions
@@ -5,35 +5,24 @@ permalink: /
---
# Introduction
This is the main website of the {{ site.projectNameStyled }} framework, a (dockerized) `Java 1.7` software designed to make the evaluation, benchmarking, and comparison of [optimization](http://en.wikipedia.org/wiki/Mathematical_optimization) or [Machine Learning](http://en.wikipedia.org/wiki/Machine_learning) algorithms easier. This software is developed at the Institute of Applied Optimization [(IAO)](http://iao.hfuu.edu.cn) at Hefei University in Hefei, Anhui, China. It can load log files created by (experiments with) an optimization or Machine Learning algorithm implementation, evaluate how the implementation has progressed over time, and compare its performance to other algorithms (or implementations) -- over several different benchmark cases. It can create reports in [LaTeX](http://en.wikipedia.org/wiki/LaTeX) (ready for publication) or [XHTML](http://en.wikipedia.org/wiki/XHTML) formats, or export its findings in text files which may later be loaded by other applications. It makes no requirements regarding the implementation of the algorithms under investigation (not even regarding the programming language) and does not require any programming on your side! It has a convenient GUI. A short set of introduction slides about this project can be found <a href="{{ site.baseurl }}/introSlides.html">here</a>.
This is a website dedicated to the benchmarking of optimization algorithms.
It was the main website of the {{ site.projectNameStyled }} framework, a `Java 1.7` software designed to make the evaluation, benchmarking, and comparison of [optimization](http://en.wikipedia.org/wiki/Mathematical_optimization) or [Machine Learning](http://en.wikipedia.org/wiki/Machine_learning) algorithms easier.
This software was developed during a research project at Hefei University in Hefei, Anhui, China, until about 2020.
*It is no longer under active development.*
## Workshop!
We jointly organize the [International Workshop on Benchmarking of Computational Intelligence Algorithms (BOCIA)](http://iao.hfuu.edu.cn/bocia18) at the Tenth International Conference on Advanced Computational Intelligence (ICACI 2018) on March 29-31, 2018 in Xiamen, China. The paper submission deadline is November 15, 2017. Here is the CfP.
Still, its source code is available.
This software can load log files created by (experiments with) an optimization or Machine Learning algorithm implementation, evaluate how the implementation has progressed over time, and compare its performance to other algorithms (or implementations) -- over several different benchmark cases.
It can create reports in [LaTeX](http://en.wikipedia.org/wiki/LaTeX) (ready for publication) or [XHTML](http://en.wikipedia.org/wiki/XHTML) formats, or export its findings in text files which may later be loaded by other applications. It makes no requirements regarding the implementation of the algorithms under investigation (not even regarding the programming language) and does not require any programming on your side!
It has a convenient GUI.
## Quick Start
If you want to directly run our software and see the examples, you can use its [dockerized version](https://hub.docker.com/r/optimizationbenchmarking/evaluator-gui/). Simply perform the following steps:
1. Install [Docker](http://www.docker.com) following the instructions for [Linux](https://docs.docker.com/linux/step_one/), [Windows](https://docs.docker.com/windows/step_one/), or [MacOS](https://docs.docker.com/mac/step_one/).
2. Open a normal terminal (Linux), the *Docker Quickstart Terminal* (Mac OS), or the *Docker Toolbox Terminal* (Windows).
3. Type in <code class="highlighter-rouge" style="white-space:nowrap">docker run -t -i -p 9999:8080/tcp optimizationbenchmarking/evaluator-gui</code> and hit return. The first time you do this, it downloads our software; this may take some time, as the software is a 600 MB package. After the download, the software will start.
4. Browse to
[`http://localhost:9999`](http://localhost:9999) under Linux or
`http://<dockerIP>:9999` under Windows and Mac OS, where `dockerIP` is the IP address of your Docker container. This address is displayed when you run the container. You can also obtain it with the command `docker-machine ip default`.
5. Enjoy the web-based GUI of our software, which looks quite similar to this web site.
But since Docker Hub closed the repositories of organizations, this configuration-free method of launching the software is gone.
## Workflow
The {{ site.projectNameStyled }} framework prescribes the following work flow, which is discussed in more detail in [this set of slides]({{ site.baseurl }}/introSlides.html):
The {{ site.projectNameStyled }} framework prescribes the following work flow:
1. *Algorithm Implementation:* You implement your algorithm. Do it in a way so that you can generate log files containing rows such as (`passed runtime`, `best solution quality so far`) for each run (execution) of your algorithm. You are free to use any programming language and run it in any environment you want. We don't care about that, we just want the text files you have generated.
2. *Choose Benchmark Instances:* Choose a set of (well-known) problem instances to apply your algorithm to.
3. *Experiments:* Well, run your algorithm, i.e., apply it a few times to each benchmark instance. You get the log files. Actually, you may want to do this several times with different parameter settings of your algorithm. Or maybe for different algorithms, so you have comparison data.
4. *Use Evaluator:* Now, you can use our evaluator component to find out how good your method works! For this, you can define the *dimensions* you have measured (such as runtime and solution quality), the features of your benchmark instances (such as number of cities in a Traveling Salesman Problem or the scale and symmetry of a numerical problem), the parameter settings of your algorithm (such as population size of an EA), the information you want to get (ECDF? performance over time?), and how you want to get it (LaTeX, optimized for IEEE Transactions, ACM, or Springer LNCS? or maybe XHTML for the web?). Our evaluator will create the report with the desired information in the desired format.
5. By interpreting the report and advanced statistics presented to you, you can get a deeper insight into your algorithm's performance as well as into the features and hardness of the benchmark instances you used. You can also directly use building blocks from the generated reports in your publications.
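The work flow above can be illustrated with a tiny sketch (plain Python, purely illustrative -- the actual evaluator is the Java software described on this site): it reads the (`passed runtime`, `best solution quality so far`) rows from step 1 and computes a simple ECDF value of the kind mentioned in step 4.

```python
# Each run of an algorithm yields rows of
# (passed runtime, best solution quality so far), as in step 1.

def time_to_target(rows, target):
    """First runtime at which the quality reached `target`
    (assuming minimization), or None if the run never did."""
    for runtime, quality in rows:
        if quality <= target:
            return runtime
    return None

def ecdf(runs, target, t):
    """Empirical fraction of runs that reached `target` within runtime `t`."""
    hits = [time_to_target(rows, target) for rows in runs]
    return sum(1 for h in hits if h is not None and h <= t) / len(runs)

runs = [
    [(1, 9.0), (5, 4.0), (9, 1.0)],  # reaches quality 2.0 at runtime 9
    [(2, 8.0), (6, 2.0)],            # reaches quality 2.0 at runtime 6
    [(3, 7.0)],                      # never reaches quality 2.0
]
print(ecdf(runs, target=2.0, t=10))  # 2 of 3 runs succeed within t = 10
```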