Overview
The ideal candidate will be a self-starter who is passionate about discovering and solving complicated problems, learning complex systems, working with numbers, and organizing and communicating data and reports. You will be detail-oriented and organized, capable of handling multiple projects at once, and comfortable with ambiguity and rapidly changing priorities. You will have expertise in process optimization and systems thinking, and will engage directly with multiple internal teams to drive business projects and automation for the RBS team. Candidates must succeed both as individual contributors and in a team environment, and must be customer-centric. Our environment is fast-paced and requires someone who is flexible and comfortable working to deadlines.
Responsibilities
* Design, develop, and operate scalable, performant data warehouse (Redshift) tables, data pipelines, reports, and dashboards.
* Develop moderately to highly complex data processing jobs using appropriate technologies (e.g., SQL, Python, Spark, AWS Lambda).
* Collaborate with stakeholders to understand business domains, requirements, and expectations, and work with owners of data source systems to understand their capabilities and limitations.
* Deliver data analyses of minimal to moderate complexity, collaborating with Data Science as complexity increases.
* Actively manage project timelines and deliverables; anticipate risks and resolve issues.
* Adopt best practices in reporting and analysis: data integrity, test design, analysis, validation, and documentation.
Basic Qualifications
* Bachelor’s degree in Computer Science, Information Technology, or a related field
* Proficiency in automation using Python
* Excellent oral and written communication skills
* Experience with SQL, ETL processes, or data transformation
* 3+ years of experience analyzing and interpreting data with Redshift, Oracle, NoSQL databases, etc.
Preferred Qualifications
* Experience with scripting and automation tools
* Familiarity with Infrastructure as Code (IaC) tools such as AWS CDK
* Knowledge of AWS services such as SQS, SNS, CloudWatch and DynamoDB
* Understanding of DevOps practices, including CI/CD pipelines and monitoring solutions
* Understanding of cloud services, serverless architecture, and systems integration
* Experience with data-specific programming languages/packages such as R or Python Pandas
* Experience with AWS solutions such as EC2, DynamoDB, S3, and EMR
* Knowledge of machine learning techniques and concepts
Additional Qualifications
* 3+ years of experience in data visualization and modeling with tools such as Tableau and QuickSight
* Experience with data modeling, warehousing and building ETL pipelines
* Experience with statistical analysis packages such as R, SAS, and MATLAB
* Experience using SQL to pull data from a database or data warehouse, and scripting experience (Python) to process data for modeling
Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you’re applying in isn’t listed, please contact your Recruiting Partner.
Amazon is an equal opportunity employer and does not discriminate on the basis of protected veteran status, disability, or other legally protected status.