Add policy on LLM contributions based on Zulip's #932
Conversation
TimMonko left a comment
Thanks @jni! I'm glad we finally have the ball rolling on this. I think you are correct that we were trying to start by doing too much at once, and this creates a great foundation that we've agreed on.
For those unable to view the private Zulip chat, we discussed multiple different LLM policies from other large and small open source projects and agreed that Zulip captured our goal with an LLM policy the most.
> trust claims they make about how napari works. LLMs are often wrong,
> even about details that are clearly answered in the napari
> documentation.
> 2. Try to submit your changes in small, self-contained pull requests,
Reading through differences from Zulip's version.

This is Zulip's:

> Split up your changes into coherent commits, even if an LLM generates them all in one go.

I think Juan's change is likely more fitting for our project, since we expect very little commit discipline.
> write good code, including AI tools. However, as noted above, you
> always need to understand and explain the changes you're proposing to
> make, whether or not you used an LLM as part of your process to
> produce them. The answer to "Why is X an improvement?" should never be
Suggested change, from:

> write good code, including AI tools. However, as noted above, you
> always need to understand and explain the changes you're proposing to
> make, whether or not you used an LLM as part of your process to
> produce them. The answer to "Why is X an improvement?" should never be

to:

> write good code, including AI tools. However, as noted above, you
> always need to make a sincere effort to understand and explain the changes you're proposing,
> whether or not you used an LLM as part of your process to
> produce them. The answer to "Why is X an improvement?" should never be
My only thought here is that I would have likely been a bit turned off by this. Not because I didn't understand the changes I was submitting, but because I didn't think it was possible for me to understand everything without help. In other words, I was always afraid I was going to be wrong, and that made me hesitant to submit PRs, even if I did my absolute best to understand things. (I do believe I was asking ChatGPT or something back a year ago when I was starting.)
Thanks Tim! I like the suggestion. (Will apply locally to have consistent line wrap.)
Tim's excellent justification:

> My only thought here is that I would have likely been a bit turned off
> by this. Not because I didn't understand the changes I was submitting,
> but because I *didn't think it was possible for me to understand
> everything without help*. In other words, I was always afraid I was going
> to be wrong, and that made me hesitant to submit PRs, even if I did my
> absolute best to understand things. (I do believe I was asking ChatGPT
> or something back a year ago when I was starting.)
docs/developers/contributing/ai.md (Outdated)
> **Do not submit an AI-generated PR you haven't personally understood and
> tested**, as this wastes maintainers' time. PRs that appear to violate this
> guideline will be closed without review.
I would suggest going a step further:
Suggested change, from:

> **Do not submit an AI-generated PR you haven't personally understood and
> tested**, as this wastes maintainers' time. PRs that appear to violate this
> guideline will be closed without review.

to:

> 1. **Do not allow AI agents to submit PRs for you.** Please submit any PRs yourself
>    using the PR template.
> 2. **Do not submit an AI-generated PR you haven't personally understood and
>    tested.** Doing this wastes maintainers' time.
>
> PRs that appear to violate these guidelines will be closed without review.
I like this! It forces that 1 extra reading step.
docs/developers/contributing/ai.md (Outdated)
> useful search engine/discovery tool in this process, but don't
> trust claims they make about how napari works. LLMs are often wrong,
> even about details that are clearly answered in the napari
> documentation.
meta 😉
Suggested change, from:

> documentation.

to:

> documentation. When in doubt, please reach out on [our Zulip chat](https://napari.zulipchat.com).
> intention, to avoid wasting maintainer time with long, sloppy
> writing. We strongly prefer clear and concise communication about
> points that actually require discussion over long AI-generated
> comments.
Suggested change, from:

> comments.

to:

> comments.
>
> That said, feel free to use AI tools to proofread, correct, or improve your own English writing;
> we know it's not everyone's native language, and large language models (LLMs)
> can be very powerful and helpful in this domain.
docs/developers/contributing/ai.md (Outdated)
> points that actually require discussion over long AI-generated
> comments.
>
> When you use an LLM to write a message for you, it remains **your
Suggested change, from:

> When you use an LLM to write a message for you, it remains **your

to:

> If you use an LLM to write a message for you, it remains **your
docs/developers/contributing/ai.md (Outdated)
> 4. Complete all parts of the **PR description template**, including screenshots
>    and the submission checklist. Don't simply overwrite the template with LLM
I don't think we require screenshots, do we?
I would consider removing this point, given my suggested changes in the warning box above.
I made some comments, mostly minor, but I do think it's worth -- at least for the time being? -- drawing the line on totally automated/AI PRs; see also the related matplotlib issue matplotlib/matplotlib#31026. Putting that in the warning box and being able to explicitly reject based on that may make our lives easier.
The failed CIs are because the new file isn't in the TOC:
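For reference, a minimal sketch of that fix, assuming the docs build their table of contents from a `_toc.yml`-style file; the actual file name, section names, and nesting in the napari docs repo may differ:

```yaml
# Hypothetical excerpt of the docs table of contents (_toc.yml).
# The surrounding "Contributing" section entries are assumptions for illustration;
# only the new ai.md entry is the point.
- caption: Contributing
  chapters:
    - file: developers/contributing/index
      sections:
        - file: developers/contributing/ai   # register the new AI policy page
```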
@psobolewskiPhD Thanks for the review! I've incorporated nearly all of your suggestions, some with minor modifications. The exception is the language encouraging LLM use for English, which I'm at best ambivalent about. We can discuss it, but let's do so in a follow-up so this goes in ASAP!
As discussed in the core team Zulip channel.
We can iterate on this but we should get an MVP in soon (ideally in time for
0.7.0) and then discuss finer points of the policy and continue to make changes
as both we and the space evolve.