
Thousands of colleagues across several universities can view high-level information about the private work of students and staff using ChatGPT Edu, the result of a misunderstanding about what is being shared, according to a University of Oxford researcher who identified the issue.
The problem affects Codex Cloud Environments in ChatGPT Edu and exposes the names of, and some metadata associated with, the public and private GitHub repositories that users within a university have connected to their ChatGPT Edu accounts.
No private code or repository data was exposed to unauthorized users. Nevertheless, the visible metadata can reveal a meaningful picture of users’ activity.
“Anyone at the university, or a large number of people at least—including me—can see a number of projects [people have] been working on with ChatGPT,” says Luc Rocher, an associate professor at the University of Oxford, who identified the issue and raised it with both the University of Oxford and OpenAI through responsible disclosure. He later approached Fast Company after what he felt was an inadequate response from both.
In addition to the projects, Rocher says he could see how many times users interacted with ChatGPT on a given project and when those conversations began. From that metadata, Rocher was able to piece together that an Oxford student was working on an article for submission using OpenAI’s tools—something the student confirmed when Rocher approached them.
“In terms of the width of different people that can access each other’s behavioural data, that is quite worrying,” says a separate University of Oxford researcher, who was granted anonymity by Fast Company to speak freely about their employer. However, the researcher acknowledges that the data exposure is internal and, while broad, limited in depth. “I suspect that might be why the data protection team haven’t reacted as quickly as if it was a public-facing thing.”
Still, the researcher calls the institution’s lack of response “naïve.” They add: “There are reasons for researchers to have private repositories.”
The situation echoes a similar issue previously reported by Fast Company, in which users of OpenAI’s standard ChatGPT product were not clearly informed that sharing their conversations could allow those chats to be indexed by search engines. The company initially denied the problem, then removed the feature after backlash.
“It seems to me it’s a question of a bad default,” says Rocher, where it isn’t made immediately and obviously clear to users what they’re opting into.
An OpenAI spokesperson tells Fast Company: “Users are in full control of how their environments are shared. Repository names can be visible to other members of the same organisation only if chosen to be by the workspace owner, and repository contents remain secure.”
The spokesperson adds: “We have spoken with the customer directly about this question and always welcome their feedback.”
The University of Oxford declined to comment on the record. Fast Company understands Rocher has identified other universities—including at least one in the Middle East—affected by the same issue.
“I think this is something universities need to be made aware of,” says Rocher.
The situation highlights a broader tension around how AI products are being deployed, experts say.
“While it is not clear how much data is exposed by default by OpenAI, it is clear that the way that these systems are integrated is making information visible to both the firm and across the organisation that was not visible before,” says Michael Veale, professor of technology law and policy at University College London.
Veale says that dynamic reflects how AI systems operate. “It is a part of a broader trend of AI tools being integrated without accounting for the ways they transform who can see what information, and the difficulty, or even impossibility, of users reasoning what is going on behind the scenes,” he says. “By definition, AI systems query external services faster than humans can.”
That mismatch between AI capabilities and human oversight creates risks.
“Humans already have enormous difficulty keeping up with understanding what information is going where at the best of times,” says Veale. “Making that faster and more ubiquitous is only going to make that harder, and increase opacity and vulnerability to breaches and attacks in the process.”



