Senior Data Engineer
By clicking the Apply button, I understand that my employment application process with Takeda will commence and that the information I provide in my application will be processed in line with Takeda’s Privacy Notice and Terms of Use.
I further attest that all information I submit in my employment application is true to the best of my knowledge.
Job Description
About the role :
At Takeda, we are a forward-looking, world-class R&D organization that unlocks innovation and delivers transformative therapies to patients.
By focusing R&D efforts on four therapeutic areas and other targeted investments, we push the boundaries of what is possible in order to bring life-changing therapies to patients worldwide.
Join Takeda as a Senior Data Engineer, where you will lead the implementation of development, testing, and automation tools and IT infrastructure in line with SDLC principles and drive the adoption of DevOps across the entire architecture.
You will report to the Head, Digital Platforms and Manufacturing Informatics Cell Therapy and will be a part of the Cell Therapy Engineering and Automation team.
How you will contribute :
Lead the implementation of development, testing, and automation tools and IT infrastructure in line with SDLC principles, and drive the adoption of DevOps across the entire architecture.
Specifically, lead continuous improvement and build continuous integration, continuous delivery, and continuous deployment (CI/CD) pipelines.
Write code to develop and test the required solution and architecture features to enable seamless data pipelining and flow to end applications such as front-end visualization platforms and Tableau dashboards.
Test the code using the appropriate testing approach, deploy software to production servers, contribute code documentation, maintain playbooks, and provide timely progress updates.
Define relational tables, primary and foreign keys, and stored procedures to create a data model structure. Evaluate existing data models and physical databases for variances and discrepancies.
Oversee the active maintenance and improvement of current data warehouses and data pipelines.
Create training documentation, train junior team members on data pipelining, CI/CD processes, and architecture implementation and testing, and provide system troubleshooting support.
Actively communicate with scientists, analytical and process development leads, manufacturing, non-clinical and clinical teams to inform QbD (Quality-by-Design) and systems approaches, data analyses, and engineered improvements driving discovery and development.
Provide and support the implementation of engineering solutions by building relationships and partnerships with key stakeholders, determining and carrying out necessary processes, monitoring progress and results, recognizing and capitalizing on improvement opportunities, adapting to competing demands, organizational changes, and new responsibilities.
Demonstrate up-to-date expertise and apply this to development, execution and improvement of infrastructure setup and provide guidance to others by supporting and aligning efforts by multiple stakeholders.
Work in a matrixed environment by leading projects using a product mindset.
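As a rough illustration of the data-modeling responsibility above (relational tables linked by primary and foreign keys feeding downstream reports), here is a minimal, self-contained sketch; the table and column names are hypothetical, not Takeda's actual schema:

```python
import sqlite3

# Illustrative only: a tiny relational model with a primary/foreign-key
# relationship, using an in-memory SQLite database.
conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # enforce referential integrity

conn.executescript("""
CREATE TABLE batch (
    batch_id INTEGER PRIMARY KEY,
    product  TEXT NOT NULL
);
CREATE TABLE measurement (
    measurement_id INTEGER PRIMARY KEY,
    batch_id       INTEGER NOT NULL REFERENCES batch(batch_id),
    parameter      TEXT NOT NULL,
    value          REAL NOT NULL
);
""")

conn.execute("INSERT INTO batch VALUES (1, 'CT-001')")
conn.execute("INSERT INTO measurement VALUES (1, 1, 'viability', 0.93)")

# Join across the foreign key, as a downstream dashboard query might.
row = conn.execute("""
    SELECT b.product, m.parameter, m.value
    FROM measurement m JOIN batch b ON b.batch_id = m.batch_id
""").fetchone()
print(row)  # ('CT-001', 'viability', 0.93)
```

The foreign-key constraint is what keeps measurements from referencing a batch that does not exist, which is the kind of variance the evaluation duty above would otherwise have to catch after the fact.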
Minimum Requirements / Qualifications :
Master’s degree or higher in a quantitative discipline such as Statistics, Mathematics, Engineering, or Computer Science.
4+ years of experience working in a data engineering role in an enterprise environment
Entry-level certification with AWS Cloud or prior experience with developing solutions on cloud computing services and infrastructure in the data and analytics space.
Strong experience developing, working in, and maintaining production data pipelines and production data warehouses
Strong understanding of Software Development Life Cycle (SDLC) as it applies to data systems and project planning / execution skills including estimating and scheduling.
Knowledge of professional software engineering practices and best practices for the full software development life cycle, including coding standards, code reviews, source control management (especially GitHub), build processes, testing, and operations
Prior experience working in healthcare / pharmaceutical industry.
Ability to read / write Python and R scripts for building data transformation pipelines
Demonstrated experience with a variety of relational database and data warehousing technology such as AWS Redshift, Athena, RDS, BigQuery
Demonstrated experience with big data processing systems and distributed computing technology such as Databricks, Spark, SageMaker, Kafka, etc.
Solution-oriented enabler mindset and excellent problem-solving skills.
Demonstrated experience working at different levels of leadership in a technical project including individual contributor work.
Prior experience working with non-technical stakeholders to deliver working data content for consumption via ad hoc analysis.
Strong interpersonal and communication skills (verbal and written)
Collaborative mindset and teamwork, with the ability to challenge and engage an audience towards better outcomes
Ability and prior experience navigating a challenging, matrixed organization
Ability and willingness to multi-task across a range of functional and technical contexts
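As a minimal sketch of the scripted data-transformation work the requirements above describe (Python pipelines feeding production warehouses), consider the following; the stage names, fields, and values are hypothetical:

```python
from statistics import mean

def extract():
    # Stand-in for reading raw rows from a source system; values arrive as strings.
    return [
        {"batch": "A", "yield_pct": "91.5"},
        {"batch": "B", "yield_pct": "88.0"},
        {"batch": "A", "yield_pct": "93.0"},
    ]

def transform(rows):
    # Cast types and aggregate: mean yield per batch.
    by_batch = {}
    for r in rows:
        by_batch.setdefault(r["batch"], []).append(float(r["yield_pct"]))
    return {batch: round(mean(vals), 2) for batch, vals in by_batch.items()}

def load(result, sink):
    # Stand-in for writing to a warehouse table; here, an in-memory dict.
    sink.update(result)

sink = {}
load(transform(extract()), sink)
print(sink)  # {'A': 92.25, 'B': 88.0}
```

Keeping extract, transform, and load as separate functions is what makes each stage independently testable, which matters for the SDLC and code-review expectations listed above.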
Nice-to-haves :
Prior experience with data analysis including aggregation, cross-data-set analyses, generating confusion matrices, exception finding, and data mapping
Ability to read / write ANSI-compatible SQL from scratch including selects and aggregate functions, DDL / DML
Prior experience in consulting or analytics project delivery through the entire software lifecycle : Requirements, Design, Testing, Deployment
Prior experience developing documentation for a data platform
Prior experience designing and implementing data architectures, including tables, views, facts, dimensions
Prior experience with Data Engineering projects and teams at an Enterprise level
Strong experience with ETL / ELT design and implementations in the context of large, disparate, and complex datasets
Prior experience managing, overseeing, and guiding junior data engineers.
Prior experience working in manufacturing facilities.
What Takeda can offer you :
Comprehensive Healthcare : Medical, Dental, and Vision
Financial Planning & Stability : 401(k) with company match and Annual Retirement Contribution Plan
Health & Wellness programs including onsite flu shots and health screenings
Generous time off for vacation and the option to purchase additional vacation days
Community Outreach Programs and company match of charitable contributions
Family Planning Support
Flexible Work Paths
Tuition reimbursement
More about us :
At Takeda, we are transforming patient care through the development of novel specialty pharmaceuticals and best-in-class patient support programs.
Takeda is a patient-focused company that will inspire and empower you to grow through life-changing work.
Certified as a Global Top Employer, Takeda offers stimulating careers, encourages innovation, and strives for excellence in everything we do.
We foster an inclusive, collaborative workplace, in which our teams are united by an unwavering commitment to deliver Better Health and a Brighter Future to people around the world.
This position is currently classified as "hybrid" in accordance with Takeda's Hybrid and Remote Work policy.
Base Salary Range : $102,200.00 to $146,000.00, based on candidate professional experience level. Employees may also be eligible for short-term and long-term incentive benefits.
Employees are eligible to participate in Medical, Dental, Vision, Life Insurance, 401(k), Charitable Contribution Match, Holidays, Personal Days & Vacation, Tuition Reimbursement Program and Paid Volunteer Time Off.
The final salary offered for this position may take into account a number of factors including, but not limited to, location, skills, education, and experience.
EEO Statement
Takeda is proud of its commitment to creating a diverse workforce and providing equal employment opportunities to all employees and applicants for employment without regard to race, color, religion, sex, sexual orientation, gender identity, gender expression, parental status, national origin, age, disability, citizenship status, genetic information or characteristics, marital status, status as a Vietnam era veteran, special disabled veteran, or other protected veteran in accordance with applicable federal, state and local laws, and any other characteristic protected by law.
Locations
Boston, MA
Worker Type
Employee
Worker Sub-Type
Regular
Time Type
Full time
Data engineer
Our client is looking for a Data Engineer for a long-term project in Bloomfield, CT; New York, NY; Austin, TX; or Chicago, IL (initially remote). Below are the detailed requirements.
Job Title : Data Engineer
Duration : Long Term W2 Tax Term
Job description :
- Bachelor's degree in Computer Science or equivalent, with 9+ years of relevant experience.
- Must have experience with PySpark, Python, Angular, SQL, Azure Databricks, and metadata management.
- Knowledge of at least one component : Azure Data Factory, Azure Data Lake, Azure SQL DW, Azure SQL
- Expertise in ETL, API development, Microservices design and Cloud deployment solutions
- Experience in RESTful APIs using message formats such as JSON and XML
- Experience in integration technologies such as Kafka
- Experience in Python and frameworks such as Flask or Django
- Experience in RDBMS and NoSQL databases
- Good understanding of SQL, T-SQL and / or PL / SQL
- Hands-on experience developing applications on AWS and / or OpenShift
- Automation Skills using Infrastructure as Code
- Familiarity with creating web applications using AngularJS or React
- Familiarity with creating benchmark tests, designing for scalability and performance, and designing / integrating large-scale systems.
- Familiarity with building cloud-native applications and knowledge of cloud tools such as Kubernetes and Docker containers
- Demonstrate excellent communication skills including the ability to effectively communicate with internal and external customers.
- Ability to use strong industry knowledge to relate to customer needs and resolve customer concerns, with a high level of focus and attention to detail.
- Strong work ethic and good time management, with the ability to work with diverse teams and lead meetings.
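The requirements above mention RESTful APIs using message formats such as JSON and XML. As a small standard-library-only sketch of those two formats carrying the same record (the payload fields here are hypothetical):

```python
import json
import xml.etree.ElementTree as ET

payload = {"id": 42, "status": "active"}

# JSON: the usual REST request/response body format.
body_json = json.dumps(payload)

# XML: the same record rendered as an element tree.
root = ET.Element("record")
for key, value in payload.items():
    ET.SubElement(root, key).text = str(value)
body_xml = ET.tostring(root, encoding="unicode")

print(body_json)  # {"id": 42, "status": "active"}
print(body_xml)   # <record><id>42</id><status>active</status></record>
```

In practice a framework like Flask would handle the serialization, but the wire formats themselves reduce to exactly this kind of mapping.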
Data Engineer
Overview
LMI is a consultancy dedicated to powering a future-ready, high-performing government, drawing from expertise in digital and analytic solutions, logistics, and management advisory services.
We deliver integrated capabilities that incorporate emerging technologies and are tailored to customers’ unique mission needs, backed by objective research and data analysis.
Founded in 1961 to help the Department of Defense resolve complex logistics management challenges, LMI continues to enable growth and transformation, enhance operational readiness and resiliency, and ensure mission success for federal civilian and defense agencies.
This position is remote but may require travel to a client site in Washington, DC (Georgetown)*
Responsibilities
As a Data Engineer you will help develop and deploy technical solutions to solve our customers’ hardest problems, using various platforms to integrate data, transform insights, and build first-class applications for operational decisions.
You will leverage everything around you : core customer products, open source technologies (e.g. GHE), and anything you and your team can build to drive real impact.
In this role, you work with customers around the globe, where you gain rare insight into the world’s most important industries and institutions.
Each mission presents different challenges, from the regulatory environment to the nature of the data to the user population.
You will work to accommodate all aspects of an environment to drive real technical outcomes for our customers.
Core Responsibilities
- Set up transfers of data feeds from source systems into a location accessible to Foundry and integrate them with existing data using enterprise architecture best practices
- Debug issues related to delayed or missing data feeds
- Monitor build progress and debug build problems in conjunction with deployment teams
- Use Foundry’s application development framework to design applications that address operational questions
- Run rapid development and iteration cycles with SMEs, including testing and troubleshooting application issues
- Execute requests for information (RFIs) surrounding the platform’s data footprint
Qualifications
- Bachelor’s degree in data science, mathematics, statistics, economics, computer science, engineering, or a related business or quantitative discipline (Master’s degree preferred)
- Preferred : Interim or Active DoD Secret clearance.
- Strong engineering background, preferably in fields such as Computer Science, Mathematics, Software Engineering, Physics, or Data Science.
- Proficiency with programming languages such as Python (PySpark, pandas), SQL, R, JavaScript, or similar languages.
- Working knowledge of databases and SQL; preferred qualifications include linking analytic and data visualization products to database connections
- At least 9 years of experience in the field
- Ability to work effectively in teams of technical and non-technical individuals.
- Skill and comfort working in a rapidly changing environment with dynamic objectives and iteration with users.
- Demonstrated ability to continuously learn, work independently, and make decisions with minimal supervision.
- Proven track-record of strong customer communications including feedback gathering, execution updates, and troubleshooting.
LMI is an Equal Opportunity Employer. LMI is committed to the fair treatment of all and to our policy of providing applicants and employees with equal employment opportunities.
LMI recruits, hires, trains, and promotes people without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, pregnancy, disability, age, protected veteran status, citizenship status, genetic information, or any other characteristic protected by applicable federal, state, or local law.
If you are a person with a disability needing assistance with the application process, please contact
Senior data engineer
At Fitch, we have an open culture where employees are able to exchange ideas and perspectives, throughout the organization, irrespective of their seniority.
Your voice will be heard allowing you to have a real impact. We embrace diversity and appreciate authenticity encouraging an environment where employees can be their true selves.
Our inclusive and progressive approach helps us to keep a balanced perspective. Fitch is also committed to supporting its employees by advancing conversations around diversity, equity and inclusion.
Fitch’s Employee Resource Groups (ERGs) have been established by employees who have joined together as a workplace community based on similar backgrounds or life experiences.
Fitch’s ERGs are available to connect employees with others within the organization to offer professional and personal support.
With our expertise, we are not only creating data and information, but also producing timely insights from every angle to influence decision making in this ever-changing and highly competitive market.
We have a relentless hunger to innovate and unlock the power of human insights and to drive value for our customers. There has never been a better time to make an impact and we invite you to join us on this journey.
Fitch Ratings is a leading provider of credit ratings, commentary and research. Dedicated to providing value beyond the rating through independent and prospective credit opinions, Fitch Ratings offers global perspectives shaped by strong local market experience and credit market expertise.
The additional context, perspective and insights we provide have helped fund a century of growth and enables you to make important credit judgments with confidence.
Fitch is seeking a strong Data Engineer to improve critical data systems used widely by internal and external stakeholders.
The ideal candidate is someone who :
- Has 5+ years of data engineering experience developing large data pipelines
- Has strong experience developing in Python and Java
- Has strong experience with relational SQL and NoSQL databases
Roles & Responsibility
- Build data pipelines and applications to stream and process datasets at low latencies.
- Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL, NoSQL, Kafka using AWS Big Data technologies.
- Collaborate with Data Product Managers, Data Architects, and other Data Engineers to design, implement, and deliver successful data solutions.
- Build analytics tools that utilize the data pipeline to provide actionable insights into customer acquisition, operational efficiency, and other key business performance metrics.
- Track data lineage, ensure data quality and improve discoverability of data.
- Work in Agile Environment (Scrum) and interact with multi-functional teams (Product Owners, Scrum Masters, Developers, Designers, Data Analysts)
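The responsibilities above center on streaming and processing datasets at low latency with tools like Kafka. As an illustration of the underlying pattern only, here is a plain-Python sketch that simulates a tiny event stream (no real Kafka broker; the event keys are hypothetical) and maintains per-key state as events arrive:

```python
from collections import Counter

def event_stream():
    # Stand-in for consuming from a Kafka topic: yields events one at a time.
    for key in ["rating_update", "rating_update", "new_issue"]:
        yield {"key": key}

# Stateful per-key aggregation, updated incrementally as each event arrives,
# rather than in a batch over the full dataset.
counts = Counter()
for event in event_stream():
    counts[event["key"]] += 1

print(dict(counts))  # {'rating_update': 2, 'new_issue': 1}
```

A real Kafka Streams or Spark Structured Streaming job adds partitioning, fault tolerance, and windowing on top, but the core loop is this same incremental update of state per event.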
Required Skills
- Strong experience developing in Python & Java.
- 5+ years of data engineering experience developing large data pipelines
- Strong SQL and NoSQL skills and ability to create queries to extract data and build performant datasets.
- Hands-on experience with message queuing and stream data processing (Kafka Streams).
Desirable Skills
- Experience with relational SQL and NoSQL databases, any RDBMS (Oracle, Postgres) and NoSQL (Cassandra, Mongo, or Redis, etc.).
- Hands-on experience with distributed systems such as Spark, Hadoop (HDFS, Hive, Presto, PySpark) to query and process data.
- Strong analytic skills related to working with unstructured datasets.
- Hands-on experience in using AWS cloud services : EC2, Lambda, S3, Athena, Glue, and EMR
- Experience with Redshift or Snowflake
- Experience in the Financial Services industry
Person specification
- Excellent problem solving and analytical skills
- Highly motivated to deliver results and meet deadlines
Data Engineer
Posted by Python US Recruiter
Fourier has partnered with several world-leading hedge funds, prop traders, and market makers in a search for elite and eager data engineers to join them.
Our clients are looking for the best data engineers in the industry with a proven track record of delivering scalable and robust data systems and are driven by solving the seemingly unsolvable problems.
They are looking for individuals who are driven and motivated but most importantly - excited by Data!
Do you love working with Python and have experience managing ETL pipelines, building a scalable distributed data platform or deriving insights from alternative data sets?
Do you have an affinity for learning new technologies and always looking to broaden your existing technical knowledge?
These clients are at the pinnacle of finance, and therefore leading compensation packages should be expected.
Primary Tech Stack :