Education and learning is one of the sectors that has benefitted most from generative AI. Taking a step forward, at its I/O conference Google announced LearnLM, a generative AI-based family of models built to enhance the overall learning experience and support deeper, more engaged learning with AI.
Highlights:
- Google introduces LearnLM-Tutor, a family of AI models designed to promote deeper, more engaged active learning.
- It was developed from a set of pedagogical principles and fine-tuning datasets, and evaluated against both qualitative and quantitative criteria.
- It is being integrated into various Google products such as YouTube, Google Search, and Gemini chat.
LearnLM Is Gemini Fine-Tuned for Learning
LearnLM is a family of models based on Google Gemini, fine-tuned mainly for learning. A collaborative effort between Google DeepMind, Google Research, and their product teams, LearnLM aims to enhance the personalization, usefulness, and engagement of learning experiences.
In evaluations, teachers and students regularly preferred LearnLM-Tutor over a prompt-tuned Gemini on a variety of pedagogical parameters. As a conversational AI tutor, it can improve learning for students by giving them immediate feedback tailored to their specific requirements, and it can support teachers by extending their reach and reducing their workload.
“Here we present our work collaborating with learners and educators to translate high level principles from learning science into a pragmatic set of seven diverse educational benchmarks, spanning quantitative, qualitative, automatic and human evaluations; and to develop a new set of fine-tuning datasets to improve the pedagogical capabilities of Gemini, introducing LearnLM-Tutor.”
The researchers fine-tuned the LearnLM-Tutor family of models for 1:1 conversational tutoring and showed that this improves education-related capabilities over a Gemini 1.0 baseline that had been iteratively prompt-tuned.
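The paper does not publish the format of its tutoring fine-tuning data, so the following is only a minimal sketch of what a role-tagged 1:1 tutoring dialogue might look like when flattened into an input/target pair for supervised fine-tuning. The `Turn` structure, the example dialogue, and the helper function are assumptions made for illustration.

```python
# Illustrative only: LearnLM's actual fine-tuning data format is not public.
# This sketches a 1:1 tutoring dialogue represented as role-tagged turns.
from dataclasses import dataclass

@dataclass
class Turn:
    role: str  # "learner" or "tutor"
    text: str

# One hypothetical training example: the tutor guides rather than giving the answer away.
tutoring_example = [
    Turn("learner", "I don't get why my loop never stops."),
    Turn("tutor",   "Let's look at it together. What is the loop condition checking each time?"),
    Turn("learner", "It checks i < 10, but I never change i."),
    Turn("tutor",   "Good spot! So what could you add inside the loop so the condition eventually fails?"),
]

def to_training_record(dialogue):
    """Flatten a dialogue into an input/target pair: the model learns to
    produce the final tutor turn given the preceding conversation."""
    *context, target = dialogue
    assert target.role == "tutor"
    prompt = "\n".join(f"{t.role}: {t.text}" for t in context) + "\ntutor:"
    return {"input": prompt, "target": " " + target.text}

print(to_training_record(tutoring_example))
```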
Let’s take a deeper look at how the researchers developed this state-of-the-art model.
How LearnLM-Tutor Was Developed
The researchers created a set of seven pedagogical benchmarks spanning quantitative, qualitative, automatic, and human evaluations, with the goal of assessing conversational AI tutors’ performance from a variety of perspectives.
They then applied these assessments to LearnLM-Tutor and a prompt-tuned Gemini 1.0, producing a portfolio of evidence of pedagogical progress. These assessments also guided the scenarios used during development and helped improve overall learning effectiveness.
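The paper’s benchmark implementations are not public. As a purely hypothetical illustration of how a rubric-based automatic comparison between two tutors might be scored, the criterion names, weights, and ratings below are invented; they are not the paper’s seven benchmarks.

```python
# Hypothetical rubric: criteria and weights are invented for illustration only.
RUBRIC = {
    "withholds_answer_until_attempt": 2.0,
    "asks_checking_question": 1.5,
    "encouraging_tone": 1.0,
    "factually_correct": 3.0,
}

def score(ratings: dict[str, float]) -> float:
    """Weighted sum of per-criterion ratings in [0, 1]."""
    return sum(RUBRIC[c] * ratings.get(c, 0.0) for c in RUBRIC)

# Example ratings a human rater (or an LLM judge) might assign to one tutor turn.
tutor_a = {"withholds_answer_until_attempt": 1.0, "asks_checking_question": 1.0,
           "encouraging_tone": 0.5, "factually_correct": 1.0}
tutor_b = {"withholds_answer_until_attempt": 0.0, "asks_checking_question": 0.0,
           "encouraging_tone": 1.0, "factually_correct": 1.0}

print("Tutor A:", score(tutor_a))  # scores higher on pedagogy under this rubric
print("Tutor B:", score(tutor_b))
```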
Additionally, the researchers discussed more focused assessment examples and explained how they were used to build particular instructional capabilities for LearnLM-Tutor, such as evaluating practice exercises and giving feedback on procedural homework problems.
Let’s take a look at them step by step:
Participatory Workshops
In the UK, the researchers ran two interactive workshops: one for learners, who were mostly university students from a variety of academic backgrounds (𝑛 = 60), and another for educators, who were mostly STEM-focused high school teachers (𝑛 = 34).
They focussed on two main activities:
- Grounding exercise: This activity investigated participants’ educational experiences, highlighting present requirements, difficulties, and possible areas for improvement with reference to gen AI technologies.
- Speculative design: In this exercise, participants were asked to picture a situation in which a learner was dealing with a variety of difficulties. They investigated how AI and social factors could either aggravate or lessen these issues through cooperative brainstorming.
Learners and instructors both appreciated personalized tuition, whether provided by AI or humans. Tutors are most successful when they are familiar with the student and can adapt their style accordingly. These workshops were essential for outlining the real-world learning expectations that shaped the training of LearnLM-Tutor.
Initial Learning Interviews
In order to begin the iterative process of participatory design for LearnLM-Tutor, an exploratory set of user-centred studies involving educators and learners was carried out by the researchers.
To better understand the learning process and requirements of potential users, they enrolled three adult learners with a genuine interest in Python programming in Codecademy’s “Learn Python 3” course.
Using a chat interface, the study team members pretended to be AI tutors during the sessions, conversing one-on-one with each learner as though they were speaking with an actual AI system.
Six researchers and educators who focus on the nexus of learning science and artificial intelligence were interviewed one-on-one in parallel. The purpose of these interviews was to learn more about the potential advantages and difficulties that educators see with the use of gen AI tutors in the classroom.
They derived several principles that helped shape the design of LearnLM-Tutor (an illustrative sketch of how such principles might be encoded as a prompt appears after the list). They are as follows:
- Do not give away solutions prematurely. Encourage learners to come up with solutions.
- Make explanations easy to understand, for example by making connections to the real world.
- Be encouraging. Celebrate learner progress and embrace mistakes as learning opportunities.
- Recognise when learners are struggling, and proactively check in with them.
- Ask questions to determine learner understanding and misunderstanding.
- Explain step-by-step, and deconstruct to teach thought processes.
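The paper compares LearnLM-Tutor against a prompt-tuned Gemini baseline but does not publish its prompts, so the sketch below only illustrates how principles like those above could be folded into a system instruction for an off-the-shelf Gemini model via the public google-generativeai Python SDK. The prompt wording, the model name, and the API key are placeholders and assumptions, not the paper’s setup.

```python
# Rough sketch of a prompt-tuned tutor baseline (not LearnLM-Tutor itself).
# Requires: pip install google-generativeai and a Gemini API key.
import google.generativeai as genai

# The prompt text paraphrases the principles listed above; the paper's actual
# iteratively tuned prompt is not published.
TUTOR_SYSTEM_PROMPT = """You are a patient one-on-one tutor.
- Do not give away solutions prematurely; nudge the learner to find them.
- Explain step by step, making connections to the real world where helpful.
- Be encouraging: celebrate progress and treat mistakes as learning opportunities.
- Ask questions to check understanding, and check in when the learner struggles."""

genai.configure(api_key="YOUR_API_KEY")  # placeholder key
model = genai.GenerativeModel(
    model_name="gemini-1.5-flash",        # illustrative model choice
    system_instruction=TUTOR_SYSTEM_PROMPT,
)

chat = model.start_chat()
reply = chat.send_message("My recursive function never hits the base case. Can you just fix it?")
print(reply.text)  # a well-behaved tutor should ask a guiding question rather than paste a fix
```

In the paper’s setup the baseline prompt was refined iteratively; a sketch like this would only be the starting point of that loop, not its end state.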
ShiffBot’s Co-Design Activities
The ShiffBot educational AI experiment is another collaborative effort that provided insights into the creation of LearnLM-Tutor. It employs a “start with one” approach, a co-design framework centred on a single individual, with the aim of creating AI technology that can benefit both that person and their community before generalizing from that initial point.
By actively involving the participant as a partner and stakeholder in the creative process, the “start with one” method is consistent with participatory approaches from contextual inquiry and user-centred design. Working with a single subject allowed the larger research team to have a comprehensive grasp of the demands and obstacles that may arise in real-world contexts.
Daniel Shiffman, a teacher, professor at NYU, YouTube creator, and programmer, participated in the ShiffBot project.
The goal of the ShiffBot project was to investigate how generative AI might benefit teachers and students. The development of LearnLM-Tutor incorporated the fundamental ideas from the ShiffBot project, such as emphasizing grounded interactions, with the sole exception of the attempt to emulate Daniel’s demeanour and delivery method.
LearnLM Integrated into Google Products
Google is improving learning experiences with LearnLM in products you already use, such as Search, YouTube, and Gemini chat, so they can help you understand topics more deeply rather than just giving you answers. Here are a few examples:
Soon, you’ll be able to decipher complicated subjects in Google Search by selecting a button to customize your AI Overview into a format that works best for you. You can choose to break down or simplify the text.
You’ll also be able to use Gems in conversations with Gemini; these are customized versions of Gemini that can serve as your own subject-matter experts. One of the pre-made Gems, the learning coach, can help you deepen your knowledge by offering practice activities like games and quizzes in addition to step-by-step study guidance.
The Gemini learning coach will go live in the upcoming months, and you can further tailor this Gem to your own learning style with Gemini Advanced.
With the help of a conversational AI tool on YouTube, you can virtually “raise your hand” while viewing educational videos to ask questions, receive helpful answers, or take a quiz on the material you’ve been learning.
This even works for longer educational videos, such as lectures or seminars, thanks to the Gemini model’s long-context capabilities. A small number of Android users in the US have already started to receive these features.
People can use Circle to Search on Android devices to get help with maths and physics word problems directly from their phones and tablets when they’re stuck. Later this year, it will be able to handle even trickier problems involving graphs, diagrams, and symbolic formulas.
Conclusion
LearnLM-Tutor is an excellent step forward from Google in narrowing the gap between AI and learning. An AI that understands how real-world learning works and what learners want from one-on-one interaction with a tutor is exactly what learners and educators need today.