Nemo curator laion #31

Draft
avigyabb wants to merge 38 commits into anyscale:main from avigyabb:nemo-curator-laion

Conversation

@avigyabb
Contributor
No description provided.

akshay-anyscale and others added 30 commits July 3, 2025 14:42
Small fix for querying service
Add example for serving Llama 3 8B
Use L4 instead of A10G for LLM serving example
Can't actually use `pathlib.Path` because that's only designed for file
paths, not URLs. But `urllib.parse.urljoin` works and is a bit prettier
(it's part of the Python standard library).
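
For illustration, a minimal sketch of the difference (the base URL here is hypothetical):

```python
from pathlib import Path
from urllib.parse import urljoin

base = "https://example.com/api/"  # hypothetical base URL

# pathlib treats the string as a POSIX path and collapses the "//" after
# the scheme, yielding "https:/example.com/api/v1"
print(str(Path(base) / "v1"))

# urljoin understands URL semantics and yields "https://example.com/api/v1"
print(urljoin(base, "v1"))
```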

---------

Signed-off-by: Aydin Abiar <aydin@anyscale.com>
Co-authored-by: Aydin Abiar <aydin@anyscale.com>
# Why this change
We previously pinned to vLLM v0 due to a bug in Ray 2.48.0 that blocked
Hugging Face tokens from being passed through runtime dependencies. That
pin is not maintainable (vLLM v0 will be deprecated soon). I just found
out we can pass the token directly through engine parameters, so we can
now use vLLM v1 with gated models.

# Summary

* Switched deployment from vLLM v0 → vLLM v1.
* Added a C compiler to the minimal Dockerfile since vLLM v1 depends on
Triton, which compiles C code at runtime (see vllm-project/vllm#2997).
* Resolved the Hugging Face token issue by passing it directly via engine
parameters instead of runtime dependencies (see the sketch after this list).
* Updated the model to
[meta-llama/Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Llama-3.1-8B-Instruct)
(the more popular choice for most use cases).
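
For illustration, a minimal sketch of what the engine-parameter approach can look like with Ray Serve's LLM API; the exact field names in the example script may differ, and `hf_token` as an engine kwarg is an assumption here:

```python
import os

from ray.serve.llm import LLMConfig, build_openai_app

llm_config = LLMConfig(
    model_loading_config={"model_id": "meta-llama/Llama-3.1-8B-Instruct"},
    # Pass the Hugging Face token directly to the vLLM engine instead of
    # through runtime dependencies (blocked by the Ray 2.48.0 bug).
    # NOTE: `hf_token` as an engine kwarg is an illustrative assumption.
    engine_kwargs={"hf_token": os.environ["HF_TOKEN"]},
    accelerator_type="L4",
)

app = build_openai_app({"llm_configs": [llm_config]})
```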

---

# Testing

* Tested Anyscale Service on AWS cloud.

---------

Signed-off-by: Aydin Abiar <aydin@anyscale.com>
Co-authored-by: Aydin Abiar <aydin@anyscale.com>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
Co-authored-by: Robert Nishihara <robertnishihara@gmail.com>
Co-authored-by: Robert Nishihara <rkn@anyscale.com>
Multiple changes here...

**README**
Updated per Douglas’s review
([https://github.com/anyscale/docs/pull/1464](https://github.com/anyscale/docs/pull/1464)):

* Added a description at the top.
* Applied code highlighting to bash commands.
* Made `HF_TOKEN` usage clearer (explicit `export` example) and added
instructions for ungated models.
* Clarified where to set the token and endpoint before querying the service (see the query sketch after this list).
* Added a `pip install openai` requirement before querying.
* Rephrased future tense and passive voice.
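
For reference, querying the deployed service looks roughly like this (a minimal sketch; the endpoint and token are placeholders):

```python
from openai import OpenAI

# Placeholders: substitute the endpoint and token of your deployed service.
client = OpenAI(
    base_url="https://<your-service>.anyscale.com/v1",
    api_key="<your-service-token>",
)

response = client.chat.completions.create(
    model="meta-llama/Llama-3.1-8B-Instruct",
    messages=[{"role": "user", "content": "Hello!"}],
)
print(response.choices[0].message.content)
```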

**serve\_llama\_3\_1\_8b.py**
With Ray 2.49.0 we can now forward the Hugging Face token to vLLM v1 via
runtime dependencies:

* Passed `HF_TOKEN` as a runtime dependency instead of an engine parameter (see the sketch after this list).
* Updated the ungated model suggestion to use Unsloth’s Llama variant (so we
stay within the Llama family instead of switching to Qwen).
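
Roughly, the change looks like this (a minimal sketch mirroring the earlier config; exact field names may differ from the script):

```python
import os

from ray.serve.llm import LLMConfig

llm_config = LLMConfig(
    model_loading_config={"model_id": "meta-llama/Llama-3.1-8B-Instruct"},
    # With Ray 2.49.0 the token can travel through the runtime environment
    # again, so no engine-level workaround is needed.
    runtime_env={"env_vars": {"HF_TOKEN": os.environ["HF_TOKEN"]}},
)
```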

**Dockerfile**
Simplified for Ray 2.49.0:

* No need to pin `transformers==4.53.3` anymore.
* Removed the `uv` installation; if the goal is to build a minimal image
based on Ray, including `uv` might confuse users.

---------

Signed-off-by: Aydin Abiar <aydin@anyscale.com>
Co-authored-by: Aydin Abiar <aydin@anyscale.com>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
Co-authored-by: Robert Nishihara <rkn@anyscale.com>
Add a tutorial on deploying Llama 3.1 8B-Instruct.

Follow the same format as the Llama 3 8B example for consistency (see
this PR:
[https://github.com/anyscale/examples/pull/12](https://github.com/anyscale/examples/pull/12)).

Following the same reasoning used to choose L4 GPUs in the Llama 3 8B
example, here we use 8×A100 GPUs, which are available in both our AWS
and GCP clouds.

---------

Signed-off-by: Aydin Abiar <aydin@anyscale.com>
Co-authored-by: Aydin Abiar <aydin@anyscale.com>
Co-authored-by: Robert Nishihara <rkn@anyscale.com>
```python
    return asyncio.run(process_batch(batch, output_dir, batch_num))


def download_webdataset(
```
Contributor

This assumes the whole dataset fits on one machine's disk, right? (Fine for LAION since it is just URLs, but probably not in general.)

What's the best way to get data into NeMo Curator? E.g., would it make sense to use Ray Data to read the data and stream it in? Or does NeMo Curator have methods for this?

Contributor Author

Since NeMo Curator uses NVIDIA DALI, I think the ideal data loading story would be to have all the images in something like S3, partitioned into different tar shards. We can then mount the S3 bucket on each of the nodes, with each node accessing the subset of tar shards that it is computing on. Would you like me to build this into the example?
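
For reference, a minimal sketch of the streaming alternative raised above, using Ray Data's WebDataset reader (the S3 prefix is hypothetical):

```python
import ray

# Stream tar shards from object storage instead of downloading the whole
# dataset to a single machine's disk. The bucket/prefix is a placeholder.
ds = ray.data.read_webdataset("s3://my-bucket/laion-shards/")

for batch in ds.iter_batches(batch_size=256):
    ...  # hand each batch to the downstream curation step
```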

