feat: Add DeepSeek-OCR integration #2721
Conversation
✅ DCO Check Passed. Thanks @rafaeltuelho, all your commits are properly signed off. 🎉
Merge Protections: Your pull request matches the following merge protections and will not be merged until they are valid.
🟢 Enforce conventional commit: Wonderful, this rule succeeded. Make sure that we follow https://www.conventionalcommits.org/en/v1.0.0/
Force-pushed from 74b3bcd to 076c3ad
Codecov Report: ❌ Patch coverage is
It seems the reason for the lack of coverage is that the new code requires GPU hardware. I had to use Google Colab (T4) to test it manually. What is the recommended approach here?
- Add DeepSeekOcrModel with automatic device detection (CUDA → MPS → Error)
- Add DeepSeekOcrOptions for configuring the OCR engine
- Support CUDA with bfloat16 and flash_attention_2 (optimal performance)
- Support MPS (Apple Silicon) with float16 and eager attention (requires PyTorch 2.7.0+)
- Auto-switch to MPS-compatible model (Dogacel/DeepSeek-OCR-Metal-MPS) on Apple Silicon
- Add clear error messages for unsupported configurations
- Add mock-based unit tests for CI coverage without GPU hardware
- Update E2E tests with DOCLING_TEST_DEEPSEECOCR environment variable guard

Note: MPS support requires PyTorch 2.7.0+ for the aten::_upsample_bicubic2d_aa operator.
See: https://github.com/Dogacel/DeepSeek-OCR-Metal-MPS/discussions

Signed-off-by: Rafael T. C. Soares <[email protected]>
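For readers following along, the CUDA → MPS → Error fallback described in the commit message could be sketched roughly as below. This is a minimal illustration, not the PR's actual code: `select_deepseek_device` is a hypothetical helper, and the availability flags are injected as plain booleans so the logic is testable without GPU hardware.

```python
def select_deepseek_device(cuda_available: bool, mps_available: bool) -> tuple:
    """Pick (device, dtype, attention implementation) for DeepSeek-OCR.

    Illustrative sketch mirroring the CUDA -> MPS -> Error order from the
    commit message; in practice the flags would come from torch, e.g.
    torch.cuda.is_available() and torch.backends.mps.is_available().
    """
    if cuda_available:
        # CUDA path: bfloat16 + flash_attention_2 for optimal performance
        return ("cuda", "bfloat16", "flash_attention_2")
    if mps_available:
        # Apple Silicon path: float16 + eager attention (needs PyTorch 2.7.0+)
        return ("mps", "float16", "eager")
    # No supported accelerator found: fail with a clear error
    raise RuntimeError("DeepSeek-OCR requires a CUDA or MPS device")
```

Note the ordering: CUDA wins even when both accelerators are available, matching the priority stated above.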
076c3ad to
4e93020
Compare
@rafaeltuelho Thanks for starting the contribution. It is definitely something which was on our radar as well. The key question we would like to assess is whether this model should be exposed as an OCR engine or as a model in the VLM pipeline.
@rafaeltuelho Not sure if that aligns with what @dolfim-ibm refers to: as a user it would be amazing to be able to integrate DeepSeek-OCR as an external service, i.e., via API calls, instead of as a local model as part of the regular pipeline.
@simonschoe Untested, but I think you could already use DeepSeekOCR with the markdown prompt for the VLM API Docling settings. https://docling-project.github.io/docling/examples/vlm_pipeline_api_model/
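The linked example points Docling's VLM pipeline at an OpenAI-compatible API. For illustration only, the chat request such a setup would send for one page could look roughly like this; the `deepseek-ocr` model id, the prompt text, and `build_vlm_ocr_payload` itself are placeholders, not names from Docling or the PR.

```python
def build_vlm_ocr_payload(image_b64: str,
                          prompt: str = "Convert this page to markdown.",
                          model: str = "deepseek-ocr") -> dict:
    """Hypothetical sketch of an OpenAI-style chat request that carries one
    page image (base64 PNG data URL) plus a markdown-conversion prompt."""
    return {
        "model": model,
        "messages": [
            {
                "role": "user",
                "content": [
                    {"type": "text", "text": prompt},
                    {
                        "type": "image_url",
                        "image_url": {
                            "url": f"data:image/png;base64,{image_b64}"
                        },
                    },
                ],
            }
        ],
    }
```

Any server exposing this request shape (e.g. a local inference server hosting the model) could then serve OCR without the model running inside the Docling process.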
@dolfim-ibm That's a good question. I remember I have only used/tested the VLM for picture description. Is it possible to run the VLM pipeline for OCR-based conversion? I tried to follow the same approach used by other OCR engines (e.g. EasyOcr) already supported in Docling.
```python
>>> options = DeepSeekOcrOptions(prompt="<image>\\nConvert to markdown.")
"""

kind: ClassVar[Literal["deepseecocr"]] = "deepseecocr"
```
Suggested change:

```diff
-kind: ClassVar[Literal["deepseecocr"]] = "deepseecocr"
+kind: ClassVar[Literal["deepseekocr"]] = "deepseekocr"
```
Please also adapt all the other occurrences of `deepseecocr`.
```python
)
self.options: DeepSeekOcrOptions

self.scale = 3  # multiplier for 72 dpi == 216 dpi
```
Is this "simply" copied from the other OCR models or is it the preferred value for DeepSeek-OCR?
It is being used in other OCR engines as well (EasyOCR and Tesseract).
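For context, the shared `scale = 3` simply upsamples pages rendered at the 72 dpi default to 216 dpi before OCR. A small sketch of the arithmetic (the helper names here are illustrative, not from the PR):

```python
BASE_DPI = 72  # default page rendering resolution in Docling OCR models
SCALE = 3      # shared by the EasyOCR and Tesseract integrations per the thread


def effective_dpi(base_dpi: int = BASE_DPI, scale: int = SCALE) -> int:
    # 72 dpi * 3 == 216 dpi: the resolution the page image is rasterized at
    return base_dpi * scale


def scaled_pixels(width: float, height: float, scale: int = SCALE) -> tuple:
    # Pixel dimensions of the rasterized page grow linearly with the scale
    return (width * scale, height * scale)
```

Whether 216 dpi is also optimal for DeepSeek-OCR (versus its native tiling resolutions) is exactly the open question raised by the reviewer above.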
```
'transformers (>=4.46.0,<5.0.0)',
'torch (>=2.0.0)',
'einops',
'Pillow (>=10.0.0)',
'addict',
'easydict',
'matplotlib',
]
```
Removing what seems not to be used/needed:
```diff
 'transformers (>=4.46.0,<5.0.0)',
 'torch (>=2.0.0)',
-'einops',
 'Pillow (>=10.0.0)',
-'addict',
-'easydict',
-'matplotlib',
 ]
```
In my tests, DeepSeek-OCR required addict, matplotlib, and easydict to be present in order for the parser to run...
```python
# DeepSeek OCR - requires GPU (CUDA or MPS) and transformers
# Only run if explicitly enabled via environment variable
# Set DOCLING_TEST_DEEPSEECOCR=true to include DeepSeek-OCR tests
```
Instead of the ENV variable, could we detect if deepseek-ocr is runnable?
What do you mean by runnable?
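Presumably "runnable" here means probing at test-collection time whether torch and a supported accelerator are actually present, rather than relying on the ENV flag. A hedged sketch of such a probe; `deepseek_ocr_runnable` is a hypothetical helper, not code from this PR:

```python
import importlib.util


def deepseek_ocr_runnable() -> bool:
    """Best-effort probe: True only if torch is importable and a CUDA or
    MPS device is available. Returns False cleanly when torch is absent."""
    if importlib.util.find_spec("torch") is None:
        return False
    import torch  # imported lazily so the probe works without torch installed

    if torch.cuda.is_available():
        return True
    mps = getattr(torch.backends, "mps", None)
    return bool(mps is not None and mps.is_available())
```

It could then guard the E2E tests with something like `@pytest.mark.skipif(not deepseek_ocr_runnable(), reason="no CUDA/MPS device")`, though an ENV override may still be useful to force-skip on slow CI runners.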
```python
_log.error(
    "DeepSeek-OCR MPS model incompatibility detected!\n\n"
    "The MPS-compatible model 'Dogacel/DeepSeek-OCR-Metal-MPS' uses deprecated "
    "transformers APIs (DynamicCache.seen_tokens) that are not compatible with "
    "your current transformers version.\n\n"
    "This is a known issue with the community-maintained MPS fork.\n"
    "See: https://github.com/Dogacel/DeepSeek-OCR-Metal-MPS/issues\n\n"
    "Workarounds:\n"
    "  1. Use a different OCR engine that supports MPS:\n"
    "     - EasyOcrOptions(lang=['en'])\n"
    "     - RapidOcrOptions()\n"
    "  2. Wait for the MPS model to be updated for newer transformers versions\n"
    "  3. Test in an isolated environment with transformers==4.43.4 (not recommended)\n"
)
```
I think this issue is solved. I updated the upstream repo to support the latest transformers.
Note:
Resolves #2497
Checklist: