It looks like coding platforms are not happy with AI-written code. Codeforces, a popular competitive programming platform, has taken a bold stance by banning AI-generated code from its competitions.
Highlights:
- Codeforces explicitly limits the use of AI systems like GPT, Gemini, Gemma, Llama, and Claude for solving programming problems.
- Participants can’t receive ready-made code or natural-language descriptions of a solution, nor can they use AI to resolve errors based on system feedback.
- The ban came after OpenAI’s new o1 model showed excellent coding performance.
What’s Changed in Codeforces’ rules?
Codeforces, a popular online programming platform, has banned the use of AI models like GPT, Gemini, and Claude in its competitions. The new rule allows limited use, such as generating boilerplate code or translating problem statements, but not creating algorithms. This rule applies strictly to in-contest participation.
Mike Mirzayanov, the founder of Codeforces, explained the change in his latest blog post. He said that AI technologies have reached new heights (especially with OpenAI’s latest o1 model performing so well at coding) and will only keep improving. That’s why Codeforces is limiting the use of AI-based systems for solving programming problems.
“It seems that neural networks are working technological wonders. Not long ago, they struggled with even the simplest tasks in our competitions, but now they are reaching new heights that cannot be overlooked. There is reason to believe that progress will continue, and we can expect further advancements in neural networks within the realm of programming competitions. For this reason, we are explicitly limiting the use of AI-based systems (such as various models like GPT, Gemini, Gemma, Llama, Claude, and others) for solving programming problems.”
Mike Mirzayanov, Founder, Codeforces
He also said that he appreciates the use of AI for learning, or for giving developers basic help. That’s why it is not a complete ban, but a limited one.
Here are the things that are not allowed on the platform at all:
- Developers can’t get ready-made code from AI.
- Even a natural-language description of the solution is prohibited.
- AI systems can’t be used to fix any issue in the code.
- AI can’t be used to create the algorithmic logic or reasoning behind a solution.
Here’s what they officially said: “The use of AI tools that assist with problem understanding, logic creation, or decision-making in a way that substitutes your reasoning is strictly disallowed.”
However, some use of AI is permitted in the following cases:
- AI systems can be used to translate the problem statements (but with no summarization).
- Code Completion Tools like Copilot can be used to generate boilerplate code or small coding suggestions.
In any rated round, this rule applies to registered contestants and out-of-competition participants alike. In unrated rounds, it applies only if stated explicitly.
Some users are happy with the decision, as it stops “cheaters” from submitting AI-written code, while others doubt it will have any impact. These cheaters typically get code from ChatGPT, then translate it into another programming language or modify it slightly to avoid detection.
Conclusion
While Codeforces has taken a major step, it will be interesting to see how the platform implements it. It already has a cheating-detection system (if two contestants’ submissions match and the matching code did not exist publicly on the internet before the round, that is treated as evidence of cheating), but how well that holds up against new developments in AI coding remains to be seen.
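The code-matching idea can be sketched with a small similarity check. This is purely illustrative: Codeforces’ actual detection system is not public, and the `normalize` and `similarity` helpers below are hypothetical names for a common token-based approach that survives variable renaming.

```python
import re

# Keywords that should NOT be renamed during normalization (tiny illustrative set).
KEYWORDS = {"int", "for", "if", "else", "while", "return"}

def normalize(source: str) -> list[str]:
    """Tokenize C-like source and rename identifiers consistently,
    so renaming variables alone cannot hide a copied solution."""
    source = re.sub(r"//[^\n]*|/\*.*?\*/", "", source, flags=re.DOTALL)  # strip comments
    tokens = re.findall(r"[A-Za-z_]\w*|\d+|\S", source)
    rename: dict[str, str] = {}
    normalized = []
    for tok in tokens:
        if re.fullmatch(r"[A-Za-z_]\w*", tok) and tok not in KEYWORDS:
            # Map each distinct identifier to a canonical name: id0, id1, ...
            normalized.append(rename.setdefault(tok, f"id{len(rename)}"))
        else:
            normalized.append(tok)
    return normalized

def similarity(a: str, b: str) -> float:
    """Jaccard similarity over token trigrams of the normalized sources."""
    def trigrams(tokens: list[str]) -> set[tuple[str, ...]]:
        return {tuple(tokens[i:i + 3]) for i in range(len(tokens) - 2)}
    ga, gb = trigrams(normalize(a)), trigrams(normalize(b))
    if not ga or not gb:
        return 0.0
    return len(ga & gb) / len(ga | gb)

# Two submissions that differ only in variable names score as identical:
original = "int sum = 0; for (int i = 0; i < n; i++) { sum += a[i]; } return sum;"
renamed  = "int total = 0; for (int j = 0; j < n; j++) { total += arr[j]; } return total;"
print(similarity(original, renamed))            # 1.0
print(similarity(original, "return 0;") < 0.5)  # True
```

A real judge would pair a check like this with the “does this code already exist publicly?” test described above, since AI-generated solutions can match across contestants without either having copied the other.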
Codeforces has committed to closely monitoring developments in AI technology. The platform plans to adjust its rules as needed to maintain a delicate balance between fair competition and the potential benefits of AI-powered learning in the programming world.