We have one position open for a Research Engineer to support the lab's work in the areas of Computer Vision and Deep Learning. The position is part of the Future Interaction Research Programme. Our topics of interest include but are not limited to:
- Contrastively trained and auto-regressive Vision and Language models (e.g. CLIP, BLIP).
- Visual LLMs (e.g. LLaVA).
- Generative Models (e.g. Stable Diffusion and auto-regressive models).
- Efficient Architectures.
- Model Compression (distillation, quantization).
- Efficient Adaptation of Large Models.
Key Responsibilities:
- On-device model deployment, quantization and optimization. Typical tools: ONNX and Qualcomm AI Runtime (QNN).
- Advanced post-training quantization of models, e.g. SmoothQuant, AdaRound.
- Data management.
- Support demos, typically on-device. This requires creating or adapting a demo app running on the phone, implementing the inference pipeline (C++), porting and integrating the model into the app, unit testing, etc.
- Support of web-based tooling, e.g. creation of custom visualization or annotation tools.
Required skills:
- Python, C++, Bash.
- Experience with at least one of Android or web app development.
- Able to learn the concepts and pipelines for on-device porting and quantization of models.
- Good at working collaboratively as part of a team.
Desired skills:
- Docker, Java.
- Familiarity with porting tools such as ONNX/QNN.
- Prior experience with quantization.