Loom is the video communication platform for async work that helps companies communicate better at scale. Loom makes it easy to record quick videos of your screen and camera and instantly share them with a link. More than 20M users across 350k+ companies around the world, including teams at HubSpot, Square, Uber, GrubHub, and LinkedIn, trust Loom to share feedback, updates, intros, training, and more every day. Our mission is to empower everyone at work to communicate more effectively and get ahead, wherever they are. Founded in late 2015, Loom has raised $203M from world-class investors including Andreessen Horowitz, Sequoia, Kleiner Perkins, Iconiq, Coatue, General Catalyst, and Slack Fund.
We are seeking a highly skilled and motivated Applied AI Researcher specializing in video and audio AI to join our team. In this role, you will advance our work in computer vision, natural language processing, and audio analysis, applying machine-learning techniques to solve complex problems in video and audio analysis and transformation.

What You'll Do
- Research and Development: Conduct state-of-the-art research in machine learning, deep learning, and artificial intelligence with a focus on video and audio analysis. Develop novel algorithms, models, and techniques to enhance video understanding, object recognition, activity recognition, speech recognition, audio classification, and other related areas.
- Data Analysis: Identify and analyze large-scale video and audio datasets to extract meaningful insights. Apply statistical and machine learning methods to understand patterns, trends, and relationships within the data, driving improvements in video and audio AI systems.
- Open-Source Models: Understand, adapt, and fine-tune open-source machine learning models for specific tasks.
- Model Development: Design, implement, and evaluate machine learning models and architectures for video and audio AI applications. Explore and optimize various neural network architectures, including convolutional neural networks (CNNs), recurrent neural networks (RNNs), and transformer models, to achieve high accuracy and efficiency.
- Algorithm Optimization: Optimize and fine-tune video and audio AI algorithms for performance, scalability, and resource efficiency. Collaborate with software engineers to integrate developed models and algorithms into production systems or applications.
- Experimentation and Evaluation: Design rigorous experiments and conduct comprehensive evaluations to validate the effectiveness and robustness of video and audio AI models. Analyze experimental results and iterate on models and algorithms to achieve optimal performance.
- Collaboration and Communication: Collaborate with cross-functional teams, including researchers, engineers, and product managers, to understand requirements and translate them into actionable research projects. Present research findings, insights, and technical reports to internal stakeholders and external audiences through presentations, papers, or conferences.
- Stay Current: Keep up-to-date with the latest advancements in machine learning, computer vision, and audio processing research. Monitor industry trends and emerging technologies to identify opportunities for innovation and improvement within video and audio AI.
What We're Looking For
- Master's or Ph.D. in Computer Science, Electrical Engineering, or a related field with a focus on machine learning, computer vision, or audio processing.
- Proven track record of research experience in machine learning and AI, specifically in video and audio analysis.
- Strong programming skills in languages such as Python, MATLAB, or C++, along with experience using deep learning frameworks (e.g., TensorFlow, PyTorch).
- In-depth knowledge of machine learning techniques, including CNNs, RNNs, and other deep learning architectures.
- Proficiency in video processing, computer vision, and/or audio signal processing.
- Experience with large-scale video and audio datasets, data preprocessing, and feature extraction.
- Strong analytical and problem-solving skills, with the ability to develop creative and innovative solutions to complex challenges.
- Excellent written and verbal communication skills, with the ability to present research findings effectively.
- A strong publication record in leading conferences or journals in machine learning, computer vision, or audio processing is a plus.