About the Role
This is a hands-on role focused on applying AI techniques to geospatial and non-geospatial data, 3D workflows, and internal datasets to drive automation, insights, and new product capabilities. The role offers an opportunity to contribute to novel research on spatial intelligence models, as well as to the practical application of existing cutting-edge techniques.
Responsibilities
* Collaborate with cross-functional teams including Product, Engineering, and Data Engineering to translate business challenges into scalable ML solutions.
* Collaborate with external AI specialists to deliver larger-scale projects.
* Explore, clean, and transform large datasets for training, validation, and inference, including geospatial data types.
* Design, test, and deploy predictive models and recommendation systems into production environments.
* Identify and develop opportunities for AI-powered automation and decision-making tools.
* Contribute to the architecture of intelligent data pipelines that support both internal operations and customer-facing features.
* Continuously evaluate model performance and apply strategies for tuning, retraining, or improvement.
* Participate in product and project planning to help prioritize AI-driven initiatives.
* Mentor team members by sharing ML best practices and helping to build a culture of technical excellence.
* Stay current with emerging ML techniques and technologies, and evaluate their potential for our use cases.
* Support team delivery and maintain clear communication on changing goals, plans, or requirements.
Qualifications
* Hands-on experience working with LLMs such as GPT, LLaMA, Claude, or similar, including fine-tuning, prompt engineering, and deploying models in production environments.
* Strong proficiency in Python, with expertise in using frameworks like Hugging Face Transformers, LangChain, OpenAI APIs, or other LLM orchestration tools.
* A solid understanding of tokenization, embedding models, vector databases (e.g., Pinecone, Weaviate, FAISS), and retrieval-augmented generation (RAG) pipelines.
* Experience designing and evaluating LLM-powered systems such as chatbots, summarization tools, content generation workflows, or intelligent data extraction pipelines.
* Deep understanding of NLP fundamentals: text preprocessing, language modeling, and semantic similarity.
* Experience with ML libraries such as TensorFlow, PyTorch, or similar.
* Experience with data science tools and platforms (e.g., Jupyter, pandas, NumPy, MLflow).
* Familiarity with cloud-based AI tools and infrastructure, especially within the AWS ecosystem.
* Strong understanding of data structures, algorithms, and statistical analysis.
* Experience working with ETL pipelines and structured/unstructured data.
* Must be available to attend quarterly company meetings in person.