Data Engineer, Kraków
We have come a long way since our first currency feed 23 years ago. We are an award-winning global company offering leading currency solutions to both retail and corporate clients, from tech start-ups to global corporations. Founded in 1996, we became the first company to share exchange rate information on the internet free of charge, and in 2001 we launched a trading platform that helped pioneer online trading around the world, enabling forex and CFD investors to trade the financial markets. Our vision is to transform how our clients meet all of their currency needs with innovative, award-winning solutions. Under new ownership with significant ambitions to grow the business on the global stage, we are looking for highly motivated, passionate individuals who want to make their mark in a dynamic environment.
We are looking for an exceptional data engineer to help us build our next generation data platform on top of Google Cloud, driving its growth and integration within the company. This platform will support our internal analytics and personalization needs, while enabling future business opportunities.
This role may be for you if:
- You view the present world as a pool of data and believe it is your job to create the tools and platforms needed to process it all.
- You believe in a future where every business is data driven, making insightful decisions based on the data available, rather than being lost and dragged down by it.
- When you hear the words ETL, you wonder why anyone is still living in the past, since you process everything in real time using the latest technology.
- You wonder whether you will ever use the challenging algorithms from your Computer Science background on a day-to-day basis, or whether it was all just theory.
- You thrive in a fast-paced, highly collaborative team of 3 to 5 engineers.
During a typical week you might:
- Build data pipelines using DataFlow.
- Explore a new managed service for better performance and a simpler data pipeline.
- Participate in meetings to collect reporting requirements for an application that will need to access information from the Cloud externally.
- Suggest a design to better organize the tables inside the data warehouse.
- Develop a data quality validation framework used both in testing and on real-time production data.
- Discuss the computationally expensive problems raised by the data science team.
- Design tools enabling easy-to-use workflows for internal teams using the data platform.
We are looking for:
- 5+ years of experience designing and implementing large scale software.
- 2+ years of experience working with Big Data technologies like DataFlow, BigQuery, Kafka, Airflow and Spark.
- Experience designing and deploying a real-time Big Data platform.
- Experience with data warehouse design.
- Strong coding ability in an object-oriented language (preferably Scala, Python, or Java).
- Excellent team player with strong communication skills (verbal and written).
- Enthusiastic about collaborative problem solving.
- Bachelor’s degree or better in Computer Science.
Extra points if you have:
- Experience with Streamsets.
- Experience with Looker, Fivetran, or Mulesoft.
- Experience with Google BigQuery.
- Experience with Apache NiFi.
- Strong SQL and data modelling skills.
- Experience with other pipeline monitoring tools.
OANDA Global Corporation is a diverse and global team with offices around the world. We value the unique skills and experiences each individual brings to OANDA. We are committed to creating and sustaining a collegial work environment in which all individuals are treated with dignity and respect and one which reflects the diversity of the community in which we operate. We provide an inclusive and accessible environment for everyone. Candidates selected for an interview will be contacted directly. If you require accommodation during the recruitment and selection process, please let us know. We will work with you to provide as seamless a recruitment experience as possible.