translate5 AI: maxToken too low for reasoning models


    • Critical
    • The default value used to calculate max tokens for the LLMs is increased so that reasoning models don't run into limits.

      Problem

      The current default of 250% of the input tokens is too low when maxToken is calculated for reasoning models. We need a larger value for this field so we don't run into errors when pre-translating tasks.
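      The calculation described above can be sketched as follows. This is a minimal illustration only: the function name and the larger multiplier for reasoning models are assumptions; only the 250% default comes from the ticket.

```python
def calc_max_tokens(input_tokens: int, is_reasoning_model: bool) -> int:
    """Derive an LLM request's max_tokens from the input token count.

    Hypothetical sketch: the current default multiplies the input token
    count by 2.5 (the "250%" rule). Reasoning models additionally spend
    output tokens on internal reasoning, so a larger multiplier is
    needed to avoid truncation errors during pre-translation.
    """
    multiplier = 5.0 if is_reasoning_model else 2.5  # 5.0 is illustrative
    return int(input_tokens * multiplier)
```

For example, a segment batch of 1,000 input tokens would get a 2,500-token limit under the current rule, which a reasoning model can exhaust before producing the full translation.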

            Assignee:
            Aleksandar Mitrev
            Reporter:
            Aleksandar Mitrev
            Axel Becher
            Stephan Bergmann, Sylvia Schumacher
            Votes:
            0 Vote for this issue
            Watchers:
            1 Start watching this issue

              Created:
              Updated:
              Resolved: