
Conversation

@neuroo (Contributor) commented Nov 25, 2025

Link to an issue, if relevant

Issue link

Adding a new rule? Look over this PR checklist

  • The issue or PR has links, references, or examples.
  • The rule has true positive and true negative test cases in a file that matches the rule name.

If the rule is my-rule, the test file name should be my-rule.js.

True positives are marked by comments with ruleid: <my-rule> and true negatives are marked by comments with ok: <my-rule>. A short example follows this checklist.

  • The rule has a good message. A good message includes:
  1. A description of the pattern (e.g., missing parameter, dangerous flag, out-of-order function calls).
  2. A description of why this pattern was detected (e.g., logic bug, introduces a security vulnerability, bad practice).
  3. An alternative that resolves the issue (e.g., use another function, validate data first, discard the dangerous flag).
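
For illustration, a hypothetical Python test file for a rule named my-rule might annotate its cases like this (the file name, rule name, and function contents here are placeholders, not part of this PR):

# my-rule.py -- hypothetical test file showing the marker convention
import subprocess

def run_command(user_input: str) -> None:
    # ruleid: my-rule
    subprocess.run(user_input, shell=True)  # true positive: the rule should flag this line

    # ok: my-rule
    subprocess.run(["echo", user_input])  # true negative: the rule should not flag this line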


chat = ChatOpenAI(model="gpt-3.5-turbo-1106", temperature=0.2)
# proruleid: prompt-injection-fastapi
chat.invoke([HumanMessage(content=user_chat)])


Semgrep identified an issue in your code:
A prompt is created and user-controlled data reaches that prompt. This can lead to prompt injection. Make sure the user inputs are properly segmented from the system's instructions in your prompts.

Dataflow graph
flowchart LR
    classDef invis fill:white, stroke: none
    classDef default fill:#e7f5ff, color:#1c7fd6, stroke: none

    subgraph File0["<b>python/fastapi/ai/prompt-injection-fastapi/prompt-injection-fastapi.py</b>"]
        direction LR
        %% Source

        subgraph Source
            direction LR

            v0["<a href=https://github.com/semgrep/semgrep-rules/blob/c2f7953f3ffe21cf3532da7aa3dc6052d919b2cf/python/fastapi/ai/prompt-injection-fastapi/prompt-injection-fastapi.py#L13 target=_blank style='text-decoration:none; color:#1c7fd6'>[Line: 13] user_name</a>"]
        end
        %% Intermediate

        subgraph Traces0[Traces]
            direction TB

            v2["<a href=https://github.com/semgrep/semgrep-rules/blob/c2f7953f3ffe21cf3532da7aa3dc6052d919b2cf/python/fastapi/ai/prompt-injection-fastapi/prompt-injection-fastapi.py#L13 target=_blank style='text-decoration:none; color:#1c7fd6'>[Line: 13] user_name</a>"]

            v3["<a href=https://github.com/semgrep/semgrep-rules/blob/c2f7953f3ffe21cf3532da7aa3dc6052d919b2cf/python/fastapi/ai/prompt-injection-fastapi/prompt-injection-fastapi.py#L15 target=_blank style='text-decoration:none; color:#1c7fd6'>[Line: 15] user_chat</a>"]
        end
            v2 --> v3
        %% Sink

        subgraph Sink
            direction LR

            v1["<a href=https://github.com/semgrep/semgrep-rules/blob/c2f7953f3ffe21cf3532da7aa3dc6052d919b2cf/python/fastapi/ai/prompt-injection-fastapi/prompt-injection-fastapi.py#L45 target=_blank style='text-decoration:none; color:#1c7fd6'>[Line: 45] user_chat</a>"]
        end
    end
    %% Class Assignment
    Source:::invis
    Sink:::invis

    Traces0:::invis
    File0:::invis

    %% Connections

    Source --> Traces0
    Traces0 --> Sink



To resolve this comment:

✨ Commit Assistant Fix Suggestion
  1. Avoid passing untrusted user input directly into LLM prompts. Instead, validate and sanitize the user_name parameter before using it in your prompt.
  2. Use input validation to restrict user_name to a safe character set, such as alphanumerics and basic punctuation, using a function like:
    import re
    def sanitize_username(name): return re.sub(r'[^a-zA-Z0-9_\- ]', '', name)
    Then, use sanitized_user_name = sanitize_username(user_name).
  3. Replace usages of user_chat = f"ints are safe {user_name}" with user_chat = f"ints are safe {sanitized_user_name}".
  4. For all calls to LLM APIs (OpenAI, HuggingFace, ChatOpenAI, etc.), ensure only sanitized or trusted data is used when building prompt or messages content.

Alternatively, if you want to reject invalid usernames altogether, raise an error if the input doesn't match your allowed pattern.

Input sanitization reduces the risk of prompt injection by removing unexpected control characters or instructions that a malicious user could provide. A minimal sketch of this approach is shown below.
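
As a sketch of the allow-list sanitization from steps 2 and 3, assuming a FastAPI query parameter and the langchain-openai import paths (the route, endpoint name, and return shape are illustrative, not code from this PR):

import re

from fastapi import FastAPI
from langchain_core.messages import HumanMessage
from langchain_openai import ChatOpenAI

app = FastAPI()

def sanitize_username(name: str) -> str:
    # Keep only alphanumerics, underscores, hyphens, and spaces.
    return re.sub(r"[^a-zA-Z0-9_\- ]", "", name)

@app.get("/chat")
def chat_endpoint(user_name: str):
    sanitized_user_name = sanitize_username(user_name)
    user_chat = f"ints are safe {sanitized_user_name}"
    chat = ChatOpenAI(model="gpt-3.5-turbo-1106", temperature=0.2)
    # Only the sanitized value reaches the prompt.
    result = chat.invoke([HumanMessage(content=user_chat)])
    return {"reply": result.content}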

💬 Ignore this finding

Reply with Semgrep commands to ignore this finding.

  • /fp <comment> for false positive
  • /ar <comment> for acceptable risk
  • /other <comment> for all other reasons

Alternatively, triage in Semgrep AppSec Platform to ignore the finding created by prompt-injection-fastapi.

You can view more details about this finding in the Semgrep AppSec Platform.

Comment on lines +20 to +23
messages=[
{"role": "system", "content": "You are a helpful assistant."},
{"role": "user", "content": user_chat},
],


Semgrep identified an issue in your code:
A prompt is created and user-controlled data reaches that prompt. This can lead to prompt injection. Make sure the user inputs are properly segmented from the system's instructions in your prompts.

Dataflow graph
flowchart LR
    classDef invis fill:white, stroke: none
    classDef default fill:#e7f5ff, color:#1c7fd6, stroke: none

    subgraph File0["<b>python/fastapi/ai/prompt-injection-fastapi/prompt-injection-fastapi.py</b>"]
        direction LR
        %% Source

        subgraph Source
            direction LR

            v0["<a href=https://github.com/semgrep/semgrep-rules/blob/c2f7953f3ffe21cf3532da7aa3dc6052d919b2cf/python/fastapi/ai/prompt-injection-fastapi/prompt-injection-fastapi.py#L13 target=_blank style='text-decoration:none; color:#1c7fd6'>[Line: 13] user_name</a>"]
        end
        %% Intermediate

        subgraph Traces0[Traces]
            direction TB

            v2["<a href=https://github.com/semgrep/semgrep-rules/blob/c2f7953f3ffe21cf3532da7aa3dc6052d919b2cf/python/fastapi/ai/prompt-injection-fastapi/prompt-injection-fastapi.py#L13 target=_blank style='text-decoration:none; color:#1c7fd6'>[Line: 13] user_name</a>"]

            v3["<a href=https://github.com/semgrep/semgrep-rules/blob/c2f7953f3ffe21cf3532da7aa3dc6052d919b2cf/python/fastapi/ai/prompt-injection-fastapi/prompt-injection-fastapi.py#L15 target=_blank style='text-decoration:none; color:#1c7fd6'>[Line: 15] user_chat</a>"]
        end
            v2 --> v3
        %% Sink

        subgraph Sink
            direction LR

            v1["<a href=https://github.com/semgrep/semgrep-rules/blob/c2f7953f3ffe21cf3532da7aa3dc6052d919b2cf/python/fastapi/ai/prompt-injection-fastapi/prompt-injection-fastapi.py#L20 target=_blank style='text-decoration:none; color:#1c7fd6'>[Line: 20] [<br>            {&quot;role&quot;: &quot;system&quot;, &quot;content&quot;: &quot;You are a helpful assistant.&quot;},<br>            {&quot;role&quot;: &quot;user&quot;, &quot;content&quot;: user_chat},<br>        ]</a>"]
        end
    end
    %% Class Assignment
    Source:::invis
    Sink:::invis

    Traces0:::invis
    File0:::invis

    %% Connections

    Source --> Traces0
    Traces0 --> Sink



To resolve this comment:

✨ Commit Assistant Fix Suggestion
  1. Never insert user-controlled values directly into prompts. For the OpenAI and HuggingFace calls, replace user_chat = f"ints are safe {user_name}" with code that validates or sanitizes user_name.
  2. If you expect user_name to be a plain name, restrict to allowed characters using a regex or manual check. Example: import re and use if not re.match(r"^[a-zA-Z0-9_ -]{1,32}$", user_name): raise ValueError("Invalid user name").
  3. Alternatively, if the input could contain dangerous characters, escape or neutralize control characters before using it in prompts. Example: user_name = user_name.replace("{", "").replace("}", "").
  4. After validation/sanitization, use the safe value when building the prompt: user_chat = f"ints are safe {user_name}".
  5. Use the sanitized user_chat for all calls instead of the raw one. For example, in your OpenAI and HuggingFace requests, replace the user message content parameter with the sanitized version.
  6. Avoid allowing users to inject prompt instructions (like "\nSystem: ..." or similar) by keeping formatting simple and validated.

Only allow trusted or validated input to reach the LLM prompt, since prompt injection can result in loss of control over the model's outputs or leakage of system information. A minimal sketch of the validation step (step 2) is shown below.
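
As a sketch of steps 2 and 4-5, assuming the value arrives as a FastAPI parameter (the helper names are illustrative, not code from this PR):

import re

from fastapi import HTTPException

ALLOWED_NAME = re.compile(r"^[a-zA-Z0-9_ -]{1,32}$")

def validate_user_name(user_name: str) -> str:
    # Reject anything outside the allow-list instead of trying to repair it (step 2).
    if not ALLOWED_NAME.match(user_name):
        raise HTTPException(status_code=400, detail="Invalid user name")
    return user_name

def build_messages(user_name: str) -> list[dict]:
    # Build the prompt only from the validated value (steps 4-5).
    user_chat = f"ints are safe {validate_user_name(user_name)}"
    return [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": user_chat},
    ]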
💬 Ignore this finding

Reply with Semgrep commands to ignore this finding.

  • /fp <comment> for false positive
  • /ar <comment> for acceptable risk
  • /other <comment> for all other reasons

Alternatively, triage in Semgrep AppSec Platform to ignore the finding created by prompt-injection-fastapi.

You can view more details about this finding in the Semgrep AppSec Platform.


huggingface = InferenceClient()
# proruleid: prompt-injection-fastapi
res = huggingface.text_generation(user_chat, stream=True, details=True)


Semgrep identified an issue in your code:
A prompt is created and user-controlled data reaches that prompt. This can lead to prompt injection. Make sure the user inputs are properly segmented from the system's instructions in your prompts.

Dataflow graph
flowchart LR
    classDef invis fill:white, stroke: none
    classDef default fill:#e7f5ff, color:#1c7fd6, stroke: none

    subgraph File0["<b>python/fastapi/ai/prompt-injection-fastapi/prompt-injection-fastapi.py</b>"]
        direction LR
        %% Source

        subgraph Source
            direction LR

            v0["<a href=https://github.com/semgrep/semgrep-rules/blob/c2f7953f3ffe21cf3532da7aa3dc6052d919b2cf/python/fastapi/ai/prompt-injection-fastapi/prompt-injection-fastapi.py#L13 target=_blank style='text-decoration:none; color:#1c7fd6'>[Line: 13] user_name</a>"]
        end
        %% Intermediate

        subgraph Traces0[Traces]
            direction TB

            v2["<a href=https://github.com/semgrep/semgrep-rules/blob/c2f7953f3ffe21cf3532da7aa3dc6052d919b2cf/python/fastapi/ai/prompt-injection-fastapi/prompt-injection-fastapi.py#L13 target=_blank style='text-decoration:none; color:#1c7fd6'>[Line: 13] user_name</a>"]

            v3["<a href=https://github.com/semgrep/semgrep-rules/blob/c2f7953f3ffe21cf3532da7aa3dc6052d919b2cf/python/fastapi/ai/prompt-injection-fastapi/prompt-injection-fastapi.py#L15 target=_blank style='text-decoration:none; color:#1c7fd6'>[Line: 15] user_chat</a>"]
        end
            v2 --> v3
        %% Sink

        subgraph Sink
            direction LR

            v1["<a href=https://github.com/semgrep/semgrep-rules/blob/c2f7953f3ffe21cf3532da7aa3dc6052d919b2cf/python/fastapi/ai/prompt-injection-fastapi/prompt-injection-fastapi.py#L37 target=_blank style='text-decoration:none; color:#1c7fd6'>[Line: 37] user_chat</a>"]
        end
    end
    %% Class Assignment
    Source:::invis
    Sink:::invis

    Traces0:::invis
    File0:::invis

    %% Connections

    Source --> Traces0
    Traces0 --> Sink



To resolve this comment:

✨ Commit Assistant Fix Suggestion
  1. Validate or sanitize the user_name input before using it to build prompts. For example, allow only a limited set of safe characters (such as alphanumerics and a few accepted symbols) using a regular expression: import re and then if not re.fullmatch(r"[a-zA-Z0-9_\- ]{1,64}", user_name): raise ValueError("Invalid user name").
  2. Alternatively, if you cannot strictly limit allowed characters, escape or segment user input clearly in prompts so it's obvious to the language model which parts are from the user, such as: {"role": "user", "content": f"USER_INPUT_START {user_name} USER_INPUT_END"}.
  3. Update all instances where user_chat = f"ints are safe {user_name}" to use the validated and/or clearly segmented version of user_name in the prompt.
  4. Use the sanitized/escaped input when calling language model APIs, for example: res = huggingface.text_generation(safe_user_chat, ...) and only insert trusted or sanitized data in the prompt contents.

Prompt injection is possible when user-controlled input is included in the prompt for an LLM without validation, escaping, or clear segmentation, allowing users to "break out" of the intended structure. Input validation reduces the risk of unexpected prompt alteration. A minimal sketch combining the validation and segmentation steps above is shown below.
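
As a sketch combining the validation (step 1) and segmentation (step 2) suggestions, assuming the huggingface_hub InferenceClient shown in the diff (the helper name and example value are illustrative, not code from this PR):

import re

from huggingface_hub import InferenceClient

def make_safe_user_chat(user_name: str) -> str:
    # Allow-list validation first (step 1); reject anything unexpected.
    if not re.fullmatch(r"[a-zA-Z0-9_\- ]{1,64}", user_name):
        raise ValueError("Invalid user name")
    # Clearly segment the user-supplied portion of the prompt (step 2).
    return f"ints are safe USER_INPUT_START {user_name} USER_INPUT_END"

huggingface = InferenceClient()
safe_user_chat = make_safe_user_chat("alice")
# Only the validated, segmented prompt reaches the model (step 4).
res = huggingface.text_generation(safe_user_chat, stream=True, details=True)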

💬 Ignore this finding

Reply with Semgrep commands to ignore this finding.

  • /fp <comment> for false positive
  • /ar <comment> for acceptable risk
  • /other <comment> for all other reasons

Alternatively, triage in Semgrep AppSec Platform to ignore the finding created by prompt-injection-fastapi.

You can view more details about this finding in the Semgrep AppSec Platform.


huggingface = InferenceClient()
# proruleid: prompt-injection-fastapi
res = huggingface.text_generation(user_chat, stream=True, details=True)


Semgrep identified an issue in your code:
A prompt is created and user-controlled data reaches that prompt. This can lead to prompt injection. Make sure the user inputs are properly segmented from the system's instructions in your prompts.

Dataflow graph
flowchart LR
    classDef invis fill:white, stroke: none
    classDef default fill:#e7f5ff, color:#1c7fd6, stroke: none

    subgraph File0["<b>python/fastapi/ai/prompt-injection-fastapi/prompt-injection-fastapi.py</b>"]
        direction LR
        %% Source

        subgraph Source
            direction LR

            v0["<a href=https://github.com/semgrep/semgrep-rules/blob/c2f7953f3ffe21cf3532da7aa3dc6052d919b2cf/python/fastapi/ai/prompt-injection-fastapi/prompt-injection-fastapi.py#L13 target=_blank style='text-decoration:none; color:#1c7fd6'>[Line: 13] user_name</a>"]
        end
        %% Intermediate

        subgraph Traces0[Traces]
            direction TB

            v2["<a href=https://github.com/semgrep/semgrep-rules/blob/c2f7953f3ffe21cf3532da7aa3dc6052d919b2cf/python/fastapi/ai/prompt-injection-fastapi/prompt-injection-fastapi.py#L13 target=_blank style='text-decoration:none; color:#1c7fd6'>[Line: 13] user_name</a>"]

            v3["<a href=https://github.com/semgrep/semgrep-rules/blob/c2f7953f3ffe21cf3532da7aa3dc6052d919b2cf/python/fastapi/ai/prompt-injection-fastapi/prompt-injection-fastapi.py#L15 target=_blank style='text-decoration:none; color:#1c7fd6'>[Line: 15] user_chat</a>"]
        end
            v2 --> v3
        %% Sink

        subgraph Sink
            direction LR

            v1["<a href=https://github.com/semgrep/semgrep-rules/blob/c2f7953f3ffe21cf3532da7aa3dc6052d919b2cf/python/fastapi/ai/prompt-injection-fastapi/prompt-injection-fastapi.py#L41 target=_blank style='text-decoration:none; color:#1c7fd6'>[Line: 41] user_chat</a>"]
        end
    end
    %% Class Assignment
    Source:::invis
    Sink:::invis

    Traces0:::invis
    File0:::invis

    %% Connections

    Source --> Traces0
    Traces0 --> Sink



To resolve this comment:

✨ Commit Assistant Fix Suggestion
  1. Avoid passing direct user input like user_name to LLMs or text generation APIs, as this allows prompt injection attacks.
  2. If you must use user_name, strictly validate and escape it before use:
    • Allow only safe characters: import re then user_name = re.sub(r'[^a-zA-Z0-9_ -]', '', user_name)
    • Alternatively, if you expect a specific format (such as usernames), use a stricter regex: ^[a-zA-Z0-9_-]+$
  3. If the LLM prompt must reference the username, clearly segment user data in the prompt. For example: user_chat = f"ints are safe. User name (not command): {user_name}"
  4. When calling APIs like huggingface.text_generation or passing messages to LLMs, use the sanitized and segmented value instead of the raw input. For example, replace huggingface.text_generation(user_chat, ...) with your sanitized and segmented prompt.
  5. Prefer only including trusted or controlled data where possible, and consider dropping user-controlled input from system prompts if not strictly required.

Using strong input validation and separating user input contextually in prompts helps prevent attackers from injecting harmful instructions into LLM queries. A short sketch of this sanitize-and-segment approach follows.
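
As a sketch of the sanitize-and-segment approach in steps 2 and 3 (the helper names are illustrative, not code from this PR):

import re

def sanitize_user_name(user_name: str) -> str:
    # Strip everything outside the safe character set (step 2).
    return re.sub(r"[^a-zA-Z0-9_ -]", "", user_name)

def build_prompt(user_name: str) -> str:
    # Label the user-controlled part so the model reads it as data, not instructions (step 3).
    safe_name = sanitize_user_name(user_name)
    return f"ints are safe. User name (not command): {safe_name}"

# Pass the sanitized, segmented prompt instead of the raw user_chat (step 4), e.g.:
# res = huggingface.text_generation(build_prompt(user_name), stream=True, details=True)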

💬 Ignore this finding

Reply with Semgrep commands to ignore this finding.

  • /fp <comment> for false positive
  • /ar <comment> for acceptable risk
  • /other <comment> for all other reasons

Alternatively, triage in Semgrep AppSec Platform to ignore the finding created by prompt-injection-fastapi.

You can view more details about this finding in the Semgrep AppSec Platform.
