Advances in machine learning, such as the transformer architecture, have enabled developers to train models on massive datasets and create generative artificial intelligence (AI) chatbots. A pivotal moment in this ongoing transformation was the 2022 release of OpenAI’s ChatGPT, which quickly gained popularity for its impressive performance on natural language processing tasks. Its release was followed by a wave of other AI assistants that are now used for a variety of tasks. In research, these tasks include designing experiments, conducting literature reviews, improving writing, summarising texts and brainstorming ideas. On the one hand, it is clear that these tools could accelerate research. On the other hand, integrating AI into research practice raises ethical challenges: some argue that AI tools could introduce biases and inaccuracies that undermine the validity of scientific knowledge. As banning the technology seems unrealistic, institutions should instead focus on promoting the responsible use of generative AI. This will require engagement from a wide range of scientific stakeholders, best-practice guidelines that keep pace with the technology, and financial investment in training scientists. In this workshop, I will present a roadmap for the responsible use of AI in research. Its steps are understanding how generative AI works, recognising current ethical concerns about its use, applying these tools to research tasks, and validating their outputs. This workshop is a teaser for the Digital Research Academy course ‘Boosting Research Productivity with Artificial Intelligence’. By the end of the session, participants will understand the AI landscape in research and be able to apply these tools responsibly and effectively to scientific tasks.