Location
Irving, TX, United States
Posted on
Nov 24, 2020
Profile
Our client is seeking a Data Engineer to join their team in Irving, TX.
This role is responsible for the development and eventual ownership of a new system, SSURM (the Safety and Soundness Unified Reporting and Monitoring system), for NAM IT. This individual will work with existing resources to form the core team that builds the platform from the ground up and will be expected to provide creative solutions in a Scrum/Agile work environment. Once the initial 1.0 version is built, this individual will become one of two primary core owners of the platform, and a support and development team will be built around their expertise.
This is an exciting opportunity to work on an important new platform that will have a major impact on the reporting and monitoring technology for the safety and soundness (technology risk) business and on our future architecture in this area.
Responsibilities
Act as the subject matter expert on data pipelines for the DevOps-focused team and for external stakeholders
Analyze, code, test, and implement data solutions and controls
Build close relationships with clients and stakeholders to understand the platform's use cases and prioritize work accordingly
Evaluate data sourcing for the new platform and build the data models and sourcing structure to support it
Become a key owner of the safety and soundness technology platform as it evolves alongside the development team
Work with business stakeholders, the end consumers of the data, to ensure their requirements are met
Contribute to the team's strategy for development and deployment best practices
Qualifications
5 years of experience building solutions to improve or replace manual data sourcing processes
5 years of experience building solutions with machine learning, graph analytics, and other advanced analytics techniques
In-depth knowledge of data pre-processing, feature engineering, and modeling
In-depth knowledge of scalable model deployment, model performance monitoring statistics, and modeling pipeline automation
Hands-on experience with Python/PySpark/Scala and basic machine learning libraries is required
Hands-on experience with XGBoost, TensorFlow, scikit-learn, PySpark, and Spark GraphX is desirable
Basic knowledge of the Hadoop ecosystem and Big Data technologies is a plus (HDFS, MapReduce, Hive, Pig, Impala, Spark, Kafka, Kudu, Solr) but not a requirement
We also work with Anaconda, Jupyter notebooks, MongoDB, and Oracle DB, so working knowledge of those would be helpful
Proficiency in programming in Java or Python, with prior Apache Beam/Spark experience a plus
Continuous Integration/Scrum experience is desired
Experience in consumer banking, financial services, or risk controls domains would be ideal
Knowledge of Agile (Scrum) development methodology is a plus
Job keywords:
USA