
The Use of Artificial Intelligence in Research

 

Purpose

The purpose of this guideline is to help researchers and responsible research leaders interpret the relevant guidelines and legislation. It applies to all AI applications, although some sections specifically address generative AI applications, such as those based on large language models (LLMs).

Using artificial intelligence in research brings both benefits and responsibilities. Researchers or research groups must comply with numerous guidelines, regulations, and laws that affect the use of AI. The principal investigator (PI) is accountable in their projects for ensuring adherence to the principles of research ethics, research integrity, applicable AI legislation, AI policy of Tampere Universities, and other AI-related guidelines at the university. In addition, they are responsible for ensuring that the use—and any potential development—of AI is properly planned, implemented, and documented.

At Tampere Universities, AI systems developed and used must, in principle, meet the ethical requirements for trustworthy AI. Ethical use of AI and ethically conducted AI research broadly consider issues of cybersecurity, data protection, and ethical questions related to accountability, responsible information sharing, and transparency. Social and other impacts related to sustainability and accountability, such as environmental effects, should also be considered. Researchers are required to assess the ethical aspects of their research when planning their studies, integrating AI-related perspectives into this ethical self-assessment.

 

Responsible Use of AI

The use of AI in research must be responsible. The researcher is accountable for both the use of AI and the integrity of the content produced. In scientific research, AI systems that incorporate randomness or stochastic elements (such as large language models) can pose challenges for reproducibility.
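As a concrete illustration, the sketch below shows one common mitigation for pipelines with stochastic components: fixing random seeds and archiving the settings of each run. It is a minimal, hypothetical example (the function and record format are illustrative, not a prescribed practice); note that hosted LLMs may still produce varying outputs across provider-side model updates even with deterministic settings.

```python
import json
import random

import numpy as np

def run_analysis(seed: int = 42) -> dict:
    """Run a stochastic analysis step with fixed seeds so it can be repeated.

    A minimal sketch: real projects should also pin library versions and,
    when hosted LLMs are involved, record the model name/version and
    decoding settings (e.g., temperature) alongside the results.
    """
    random.seed(seed)                  # Python's built-in RNG
    rng = np.random.default_rng(seed)  # NumPy RNG with an explicit seed
    result = float(rng.normal(size=1000).mean())  # stand-in for the real analysis
    return {"seed": seed, "numpy_version": np.__version__, "result": result}

# Archive this record with the research outputs so the run can be reproduced.
print(json.dumps(run_analysis(), indent=2))
```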

Researchers should remain critical and aware of potential biases in AI-generated outputs. The data used to train an AI model, and the way it was curated, may not always be transparent or impartial. For instance, large language models (LLMs) are largely trained on texts available online, which tend to represent Western and English-language perspectives. Similarly, the standard datasets used to assess the performance of various machine learning models might represent only a limited sample or may contain biases in their annotations. Such biases can lead to unfair treatment during research evaluations (e.g., in grant applications and peer review), even if the evaluation guidelines permit AI use.

Only a human can be recognized as an author of research outputs. AI cannot bear responsibility for outputs and therefore cannot be considered an author. Researchers must also assess the risks associated with erroneous, imprecise, or irrelevant outputs, and they must verify the appropriateness and accuracy of AI outputs—especially those produced by generative AI. The researcher remains fully responsible for any published results and the overall impact of their research.

As a general rule, the use of AI should be reported openly, particularly if it is employed extensively. The impact of AI on the research process should be evaluated, and the tools used should be clearly identified—this may include detailing the tool’s name, version, and intended purpose. It may also be necessary to specify the inputs provided to and the outputs produced by AI applications. In general, researchers should adhere to the principles of open science.
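One lightweight way to meet this reporting expectation is to keep a structured record of each AI use alongside the project documentation. The sketch below is a hypothetical format of our own devising, not a schema mandated by the university or this guideline.

```python
import json
from dataclasses import asdict, dataclass, field
from datetime import date

@dataclass
class AIUsageRecord:
    """One documented use of an AI tool in the research process (illustrative schema)."""
    tool: str                  # e.g., "ChatGPT"
    version: str               # model or application version
    purpose: str               # what the tool was used for
    prompts: list = field(default_factory=list)  # inputs given to the tool
    outputs_summary: str = ""  # brief description of what the tool produced
    used_on: str = field(default_factory=lambda: date.today().isoformat())

record = AIUsageRecord(
    tool="ChatGPT",
    version="GPT-4o",
    purpose="Language revision of the methods section",
    prompts=["Improve the clarity of the following paragraph: ..."],
    outputs_summary="Revised paragraph; every edit reviewed and accepted manually",
)
print(json.dumps(asdict(record), indent=2))  # store with the project documentation
```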

Researchers must critically assess which materials may be input into AI tools. Unpublished and sensitive information should not be entered into AI systems unless it is certain that the material will not be used, for example, for training language models. The system must also meet the necessary cybersecurity requirements to protect the input data.
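A first automated line of defence can be a simple screen that flags obviously identifying material before anything is sent to an external tool. The sketch below is a minimal, hypothetical check (the patterns are illustrative and far from exhaustive); it supports, but never replaces, human judgment about confidentiality.

```python
import re

# Illustrative patterns only; real screening needs much broader coverage
# (names, addresses, project-specific identifiers, etc.).
SUSPECT_PATTERNS = {
    "email address": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "Finnish personal identity code": re.compile(r"\b\d{6}[-+A]\d{3}[0-9A-Z]\b"),
}

def reasons_not_to_upload(text: str) -> list:
    """Return the labels of any suspect patterns found in the text."""
    return [label for label, pattern in SUSPECT_PATTERNS.items() if pattern.search(text)]

issues = reasons_not_to_upload("Participant P7 (email: p7@example.org) reported ...")
if issues:
    print("Do not upload; found:", ", ".join(issues))
```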

Please note that material produced by AI systems—such as text or images—may constitute direct or paraphrased reuse of content from the data used in the AI’s training. Researchers must respect the authorship of others, properly reference existing works, and uphold intellectual property rights.

Additionally, the outputs generated by AI systems cannot be protected by copyright.

In general, researchers should understand the technical and ethical implications of the tools they use regarding privacy, confidentiality, intellectual property, and copyright, considering the tool’s privacy settings, who controls it, where computations are carried out, and how these factors may affect both the input data and the use of the outputs.

 

Legal Requirements

The applicable laws must be complied with, even and especially when utilizing AI. The use of AI in research—and research on AI systems—is subject to various types of legislation. One of the most important is the EU’s so-called AI Act, which sets the conditions for using AI in the EU. Additionally, regulations related to intellectual property, product liability, and data protection must be observed. Potential dual-use (i.e., use for both civilian and military purposes) must be identified and the applicable legislation followed. For legal issues, it is advisable to consult legal services at lakiasiat [at] tuni.fi.

The AI Act provides certain exceptions to its requirements if AI is used solely for scientific research or R&D activities. The regulation does not apply to AI systems or models that have been specifically developed and deployed solely for the purposes of scientific research and development, nor to their outputs. In other words, the broad requirements of the AI Act do not apply if an AI system is developed and used exclusively for scientific research purposes. For example:

  • an AI system used only in scientific research and subsequent studies for detecting heart failure
  • an AI system developed solely for research purposes that can distinguish background noise from speech and transcribe interviews.

For product-oriented research, testing, and development activities related to AI systems or models, the AI regulation does not apply until these systems and models are deployed or brought to market. For instance:

  • a contract research project that tests an AI model for predicting the progression of heart failure prior to its market launch;
  • developing an AI model that improves the functioning of an autonomous vehicle as part of a product development process (prior to implementation).

The above exception does not remove the obligation to comply with the AI Act once a system resulting from research and development activities is implemented or commercialized. Nor do the AI Act’s exceptions for R&D-related activities exempt anyone from other legal obligations, such as compliance with the GDPR.

Additionally, the AI Act does not apply to open-source (free) AI systems, except if they are brought to market or deployed:

  • as high-risk AI systems;
  • as systems falling under prohibited practices;
  • as systems generating synthetic audio, image, video, or text content;
  • as systems for emotion recognition or biometric identification;
  • as systems used for deepfake production or manipulation; or
  • as systems producing or manipulating text related to matters of public interest.

In practice, the open-source exception is quite limited, especially concerning prohibited practices and high-risk AI systems.

 

It is essential to note that the AI Act still applies to AI systems that can be used (also) for R&D purposes, in addition to other purposes. For example, ChatGPT, Microsoft Copilot, and other language model–based applications fall under this category when used in research or development activities. All research and development activities must adhere to recognized research ethics and research integrity principles as well as applicable legislation.

The AI Act lists certain prohibited practices, such as the use of manipulative techniques, exploitation of vulnerabilities, social scoring, predictive law enforcement, emotion recognition in workplaces and educational institutions, indiscriminate scraping of images, real-time biometric identification, and classification systems that use biometric or other especially sensitive personal data. AI systems that fall under these categories must not be developed, deployed or brought to market in the EU. However, research on such systems is permitted, provided that guidelines related to cybersecurity, data protection, and ethics are followed. Conducting research securely may require, for example, a dedicated device or environment. In such cases, please contact cybersecurity (tietoturva [at] tuni.fi), data protection (tietosuoja [at] tuni.fi), or legal services (lakiasiat [at] tuni.fi).

In addition to the AI Act’s requirements, legal requirements related to intellectual property must be considered. Copyright-protected publications should not be used as inputs for AI tools in cases where the work might be used for training the language model or where the AI system provider would acquire rights to it. If you are planning to upload unpublished research results or material covered by the university’s intellectual property rights into an AI tool, you must be extremely cautious, especially if the tool is not approved by the university.

When processing personal data, compliance with the EU General Data Protection Regulation (GDPR) and Tampere Universities’ data protection guidelines is mandatory. If personal data is collected or used, a data protection impact assessment must be conducted before AI tools are taken into use in a project. From the perspective of data protection legislation, using AI tools may present specific risks (for example, unintended reuse if data remains within the language model). Furthermore, using AI in contexts such as selecting research subjects might result in profiling. When informing research subjects, it is important to describe data processing carried out using AI in sufficient detail. For more information, see the guidance on data protection in research.

 

Practical Application

Below are practical examples of how AI can be utilized at different stages of research, along with key considerations. In all cases, sustainability and accountability should be taken into account.

 

  1. Planning

    Examples:

    • Collecting sources and writing a literature review
    • Developing the research idea or identifying research gaps
    • Brainstorming

    Considerations:

    • Always verify AI outputs (e.g., check whether cited sources actually exist; make sure that you understand the source material to confirm that the output is correct)
    • Ensure that any data you input is not inadvertently leaked through the tool
    • Verify that original ideas you formulate with the help of AI are truly original
    • Assess ethical issues and legal prerequisites related to AI use and potential AI development (conduct an ethical self-assessment and identify the applicable legislative requirements)
  2. Grant Application

    Example(s):

    • Preparing a grant proposal

    Considerations:

    • Follow the funding agency’s principles regarding AI
    • Report the use of AI openly
    • Describe the role of AI in the research and its impact
  3. Conducting Research

    Example(s):

    • Designing surveys
    • Creating programming code
    • Data curation and analysis

    Considerations:

    • Consider that question formulations might derive from previous surveys (ensure you get the data you need and minimize the collection of personal data)
    • Verify and test any code generated (see the test sketch after this list)
    • Assess the impact on the quality and reliability of outputs
    • Document the use of AI if it has impacts: save the prompts for later review
  4. Publication

    Example(s):

    • Proofreading and revising language
    • Rewriting or summarizing text
    • Drafting or formatting the list of references
    • Publishing or further developing AI software for commercial use

    Considerations:

    • Adhere to the publication channel’s guidelines on AI use and reporting
    • Report the use of AI in sufficient detail so that readers can assess the significance of the tool’s influence on the reliability of the research
    • Check the references and citations carefully
    • Note that AI outputs cannot be copyrighted
    • When publishing or further developing AI for commercial use, ensure that the application meets its intended requirements
    • Consider potential dual-use scenarios: could malicious actors repurpose the outputs for harmful or even military applications?
  5. Data Storage

    Examples:

    • Storing data for future use
    • Using AI to generate metadata

    Considerations:

    • Metadata should accurately document how AI was utilized in producing and curating the data
    • Check the accuracy of AI-generated metadata
  6. Expert and Evaluation Tasks

    Example(s):

    • Evaluating research or researchers (e.g., in grant applications and scientific publications)

    Considerations:

    • Refrain from excessive use of AI in sensitive tasks that could impact other researchers or organizations
    • Do not input unpublished outputs of others into systems where data security and protection cannot be guaranteed
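The point in step 3 (Conducting Research) about verifying and testing generated code deserves a concrete illustration. The sketch below treats an AI-written helper like any untrusted contribution: it is covered by tests before being used. The function itself is a hypothetical stand-in for generated code, not output from any particular tool.

```python
from typing import List, Optional

def mean_imputation(values: List[Optional[float]]) -> List[float]:
    """Replace missing values (None) with the mean of the observed values.

    Stand-in for an AI-generated helper that must be verified before use.
    """
    observed = [v for v in values if v is not None]
    if not observed:
        raise ValueError("no observed values to impute from")
    mean = sum(observed) / len(observed)
    return [mean if v is None else v for v in values]

def test_mean_imputation() -> None:
    # Typical case: the missing entry takes the mean of the rest.
    assert mean_imputation([1.0, None, 3.0]) == [1.0, 2.0, 3.0]
    # No missing values: input is returned unchanged.
    assert mean_imputation([5.0, 7.0]) == [5.0, 7.0]
    # Edge case: all-missing input must fail loudly, not return junk silently.
    try:
        mean_imputation([None, None])
    except ValueError:
        pass
    else:
        raise AssertionError("expected ValueError for all-missing input")

test_mean_imputation()
print("generated code passed its tests")
```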

Background

This AI guideline complements the Finnish Code of Conduct for Research Integrity issued by the Finnish National Board on Research Integrity (TENK) as well as the general research ethical principles addressed, for example, in Guidelines for ethical review in human sciences. Our Tampere Universities community is committed to these guidelines, and every researcher is responsible for adhering to them when using AI. Furthermore, this guideline supplements the Tampere University Community’s AI policy.

This guideline is partly based on “Living guidelines on the responsible use of generative AI in research”, published by the European Commission and the ERA Forum, which describes how good scientific practice should be applied in the use of generative AI. The present guideline expands on it with additional examples of AI use in research and with legislative requirements that also pertain to AI research. In part, the University of Helsinki’s guideline on the use of generative AI in research has served as inspiration.

Sources

AI Policy (intra.tuni.fi)

Research Ethics (intra.tuni.fi)

The Finnish Code of Conduct for Research Integrity and Procedures for Handling Alleged Violations of Research Integrity in Finland (tenk.fi)

Guidelines for Ethical Review in Human Sciences (tenk.fi)

Artificial Intelligence (AI) in Science - European Commission (ec.europa.eu)

Living guidelines on the responsible use of generative AI in research | Research and innovation (ec.europa.eu)

The Use of Generative Artificial Intelligence in Research | University of Helsinki (helsinki.fi) 

Ethics Guidelines for Trustworthy AI | Shaping Europe’s Digital Future (ec.europa.eu)

Ethics of Artificial Intelligence | UNESCO (unesco.org)

Regulation (AI Act) - EU - 2024/1689 - EN - EUR-Lex (eur-lex.europa.eu)

Artificial Intelligence – Questions and Answers (ec.europa.eu)

Self-study materials 

The University of Helsinki and MinnaLearn have developed an online course that provides a good foundational understanding of AI.

Elements of AI (elementsofai.com)

A presentation by a university lawyer about the AI Act and how to apply it in scientific research:

AI Act & Scientific Research 2025-05-28 (recording)

Published: 12.3.2024
Updated: 3.10.2025