Senior Research Assistant (Emote VR Voicer Virtual Reality)

Employer: Manchester Metropolitan University

Location: All Saints Campus, Manchester, UK

Salary: Competitive salary

Job type: Part time (0.5 FTE), fixed term

Posted: 29 April 2026

Sector: Science & Research

Job Description

The School of Digital Arts (SODA) is a purpose-built, interdisciplinary school at one of the UK's leading universities, offering industry- and research-informed courses and specialist spaces equipped with the latest technologies. SODA is a proud part of Manchester Metropolitan University, building on the creative, scientific, technological and business strengths of a university whose research is rated as 'world-leading' and is changing the way we live, work, learn and play.

AI systems are increasingly able to detect a speaker's emotions, opening a new affective channel that can be explored in art. The controls available in standard Virtual Reality (VR) can be supplemented with speech recognition, natural language processing and sentiment analysis. We aim to embody this potential in the front end of the Emote VR Voicer interface, which translates the emotional meaning of vocal utterances into the morphing of abstract 3D animated shapes, enabling a radically new aesthetic experience. We are using iterative design cycles and ultimately aim to develop an interface that improves the user's wellbeing.

We are looking for candidates with the technical know-how to finalise a partially existing VR prototype. You will work from a design brief, using Unity scripting to complete the app and make it ready for exhibition and release on the Meta store. Working within the School of Digital Arts, you will join state-of-the-art research on the AHRC-funded Emote VR Voicer project to develop a new, intelligently responsive VR app that incorporates speech recognition and meaning classification.

About the role:

You will work closely within a small project team of artists, a psychologist and AI researchers in an iterative development cycle. You will be responsible for the VR development, mapping detected emotion to visual animations using blend trees and procedural animation. You will also create UI elements and interactions between vocal input and visual output that appeal to singers and non-singers alike.

You will use your programming skills and Unity experience to integrate AI models that detect and tag emotional meaning from audio, and map these to steer real-time visuals in Unity. Live audio features will also be mapped to animate graphics. Working closely with the project lead, you will create a system in which the shapes are animated differently depending on which emotion the system detects. Image synthesis, procedural content generation and style transfer will further expand a bank of 3D graphics created specifically for this project. The ideal candidate will have experience with the wider pipeline, including asset generation, rigging and animation. You will also be involved in some of the evaluation work for this project.

The post is for 2.5 days per week (0.5 FTE) on a fixed-term basis for 8 months. The working pattern will be on-campus, with some remote working possible depending on project stage. The working days and hours can be negotiated with the successful candidate.

About you:

Key skills: a good understanding of programming within the Unity games engine using C#, and experience with VR application development.

Essential skills and experience:
- A degree in computer science, software engineering or a similar technical field, or equivalent professional experience
- Experience developing projects with C#
- Hands-on experience developing VR/XR applications in Unity
- Proficiency with scripting for procedural animation generation
- Experience with image- and/or audio-based projects
- Experience with real-time system optimisation (e.g. low-latency audio/visual feedback in VR)
- Experience working in interdisciplinary teams
- Excellent communication and interpersonal skills
- Creative problem-solving skills
- Self-motivation and the ability to undertake independent research related to the brief
- Excellent ability to work to deadlines

Desirable:
- A relevant postgraduate qualification
- Experience bringing Python models into Unity
- Proficiency with Autodesk Maya modelling, skinning and rigging
- Experience writing and co-writing research papers
- Sensitivity to nuances in visual aesthetics

To apply, please submit your CV, a cover letter explaining how you meet the criteria (including a link to previous relevant work), and two named references via our application portal. If you would like to discuss the role, please email Adinda at: A.vant.Klooster@mmu.ac.uk

Please note that, to be eligible, candidates must already have the right to work in the UK.

Manchester Metropolitan University fosters an inclusive culture of belonging that promotes equity and celebrates diversity. We value a diverse workforce for the innovation and diversity of thought it brings and welcome applications from all local and international communities, including Black, Asian, and Minority Ethnic backgrounds, disabled people, and LGBTQ+ individuals. We support a range of flexible working arrangements, including hybrid and tailored schedules.

Apply on jobs.ac.uk