Artificial intelligence: AU’s principles for GAI still apply

New generative AI tools like DeepSeek have prompted many employees to ask which tools they’re allowed to use – and what type of data they may upload to them. Peter Bruun Nielsen, deputy director of AU IT, confirms that staff should continue to follow AU’s seven principles for work-related use of GAI until new guidelines are issued.

A hand holds a phone with an image of a chatbot icon on the screen. Four speech bubbles emerge from the screen.

In recent months, there has been renewed debate over the many GAI tools coming onto the market, spurred on by Chinese company DeepSeek, which launched its first free chatbot in January this year.

Many employees at AU are interested in what the different tools can do, but they’re also keen to know which tools they’re allowed to use and what type of data they can safely upload to them.

AU IT is following developments closely and consulting with external experts on the different GAI tools available – not only the new DeepSeek chatbot but also other services like Microsoft Copilot, on which the government’s IT advisory council and the Agency for Public Finance and Management are currently conducting a risk assessment. Until the authorities issue new official guidelines, Deputy Director for AU IT Peter Bruun Nielsen confirms that AU employees should continue to adhere to AU’s principles for work-related use of GAI, which were launched in August last year.

“The most important principle is not to use GAI for confidential or sensitive personal data. If we stick to this rule, it’s still possible to experiment with new GAI tools and explore how they can help us perform our professional tasks. But it’s vital we maintain our critical judgement with regard to the data we put into GAI tools,” says Peter Bruun Nielsen.

Peter Bruun Nielsen also leads the working group that developed the seven principles for using GAI at the university. Since then, a new working group has been set up to collect insights from across the university with a view to developing more specific rules and frameworks for how GAI can be used most effectively at AU. But the speed of development is challenging, explains Peter Bruun Nielsen.

“New GAI products are constantly entering the market, and the technology behind them is evolving at an enormous pace, whilst also changing the work processes we’re familiar with. This makes it difficult to set out more specific principles than those we currently have if we still want to leave room to explore how we can use GAI in our everyday work,” says Peter Bruun Nielsen, who emphasises that he and his team are keeping a close eye on the reports and risk assessments issued by the official authorities and that AU is prepared to adjust its guidelines or limit access to services if recommended by the authorities.

Your guide to GAI: Seven principles you should know

  1. Don’t use GAI for confidential or sensitive personal data
  2. Remember that you are responsible for the quality of the content you use or share
  3. Credit your use of GAI so that others know how and why you have used it
  4. Be critical when using GAI as a source of information – it can make mistakes
  5. Be sensitive to bias in GAI-generated content – it often reproduces the biases in the data it’s trained on
  6. Only use GAI when standard search engines won’t do – it’s better for the climate
  7. Coordinate purchases of licenses for GAI applications with AU IT and AU Finance – to keep costs down and keep our data safe

Read more about these principles at medarbejdere.au.dk/en/gai.

Unsure about what types of data you can upload to GAI? Read more about the different types of data.