Software Engineer | Data Platform
Ramp is building the next generation of finance tools, from corporate cards and expense management to bill payments and accounting integrations, designed to save businesses time and money with every click.
Over 12,000 customers cut their expenses by 3.5% per year and close their books 8x faster by switching to the Ramp platform.
Founded in 2019, Ramp powers the fastest-growing corporate card and bill payment software in America and enables billions of dollars of purchases each year.
Ramp continues to grow quickly, more than doubling its revenue run rate in the first half of 2022.
Valued at $8.1 billion, Ramp's investors include Founders Fund, Stripe, Citi, Goldman Sachs, Coatue Management, D1 Capital Partners, Redpoint Ventures, General Catalyst, and Thrive Capital, as well as over 100 angel investors who were founders or executives of leading companies.
The Ramp team comprises talented leaders from leading financial services and fintech companies (Stripe, Affirm, Goldman Sachs, American Express, Mastercard, Visa, Capital One), as well as technology companies such as Meta, Uber, Netflix, Twitter, Dropbox, and Instacart.
Ramp was named Fast Company's #1 Most Innovative Company in North America in 2023 and #5 on LinkedIn Top Startups 2022.
About the Role
The Data Platform team develops and owns the systems that enable Ramp's reporting and strategic decision-making, as well as the integration of machine learning models into our Risk systems and the product.
As a member of the Data Platform team, you'll build and maintain the infrastructure that enables Ramp to realize value from data.
You'll also partner with Ramp's analytics engineers, data scientists, and other data professionals to build internally and externally facing data products.
Our ideal candidate is excited about building systems for data collection, processing, storage, and retrieval, and is also passionate about making these systems observable, reliable, scalable, and highly automated.
What You'll Do
- Build and integrate the components of Ramp's Analytics Platform and Machine Learning Platform
- Build tools that improve the agility and data experience of Ramp's Data Scientists, Analytics Engineers, Engineers, and Operations teams
- Build the batch and streaming data pipelines critical to Ramp's daily operations using Airflow, Snowflake, Materialize, and other data processing technologies
- Collaborate with stakeholder teams on building and productionizing analytical products and machine learning models
- Build reliable, scalable, maintainable, and cost-efficient systems across the stack
What You Need
- Minimum 2 years of experience with workflow orchestrators like Airflow, Dagster, or Prefect
- Minimum 2 years of experience building infrastructure on AWS, GCP, or Azure
- Knowledge of SQL and experience with Snowflake, Redshift, BigQuery, or similar databases
- Intuition around analytics and machine learning
- Strong Python programming skills
- Track record of building highly-reliable infrastructure for data storage and processing
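The SQL bullet above describes day-to-day warehouse query work. As a rough illustration only (the table and column names are invented, and this uses Python's built-in sqlite3 rather than Snowflake, whose dialect differs), the kind of aggregation a reporting pipeline runs might look like:

```python
import sqlite3

# Toy expense table standing in for a warehouse table; all names invented.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE expenses (dept TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO expenses VALUES (?, ?)",
    [("eng", 120.0), ("eng", 80.0), ("sales", 50.0)],
)

# Per-department spend totals: the bread-and-butter aggregation that
# reporting pipelines on Snowflake/Redshift/BigQuery run constantly.
rows = conn.execute(
    "SELECT dept, SUM(amount), COUNT(*) FROM expenses GROUP BY dept ORDER BY dept"
).fetchall()
print(rows)
```

In a real pipeline the query would run against the warehouse itself and be orchestrated by a scheduler such as Airflow; this sketch only shows the query shape.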
Nice to Haves
- Expertise with AWS
- Expertise with the Modern Data Stack: dbt, Looker, Snowflake, and Fivetran
- Expertise with building and deploying machine learning systems
- Experience with Terraform and Datadog
- The annual salary/OTE range for the target level for this role is $153,000-$180,000 + target equity + benefits (including medical, dental, vision, and 401(k))
Ramp Benefits (for U.S. based employees)
- 100% medical, dental & vision insurance coverage for you; partially covered for your dependents; One Medical annual membership
- 401(k) (including employer match). Please note: only 401(k) contributions made while employed by Ramp are eligible for an employer match
- Unlimited PTO
- Fertility HRA: up to $5,000 per year
- WFH stipend to support your home office needs
- Wellness stipend
- Parental Leave
- Relocation support
- Pet insurance
Job ID 5176177002
Related Jobs
Data engineer
Our client is looking for a Data Engineer for a long-term project in Bloomfield, CT; New York, NY; Austin, TX; or Chicago, IL (initially remote). Below are the detailed requirements.
Job Title: Data Engineer
Duration: Long-term (W2 tax term)
Job description:
- Bachelor's degree in Computer Science or equivalent, with a minimum of 9+ years of relevant experience
- Must have experience with Pyspark, Python, Angular, SQL, Azure Databricks, Metadata.
- Knowledge of at least one component: Azure Data Factory, Azure Data Lake, Azure SQL DW, Azure SQL
- Expertise in ETL, API development, Microservices design and Cloud deployment solutions
- Experience in RESTful APIs using message formats such as JSON and XML
- Experience in integration technologies such as Kafka
- Experience in Python and frameworks such as Flask or Django
- Experience in RDBMS and NoSQL databases
- Good understanding of SQL, T-SQL, and/or PL/SQL
- Hands-on experience developing applications on AWS and/or OpenShift
- Automation Skills using Infrastructure as Code
- Familiarity with creating web applications using AngularJS or React
- Familiarity with creating benchmark tests, designing for scalability and performance, and designing / integrating large-scale systems.
- Familiarity with building cloud-native applications and knowledge of cloud tools such as Kubernetes and Docker containers
- Excellent communication skills, including the ability to communicate effectively with internal and external customers
- Ability to use strong industry knowledge to relate to customer needs and resolve customer concerns, with a high level of focus and attention to detail
- Strong work ethic and good time management, with the ability to work with diverse teams and lead meetings
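Several of the bullets above (RESTful APIs, JSON message formats, Flask) concern request/response plumbing. A minimal sketch of that idea follows, using only Python's standard library rather than Flask; the endpoint purpose, payload shape, and field names are invented for illustration.

```python
import json


def handle_create_order(raw_body: bytes):
    """Parse a JSON request body and return (status_code, json_response).

    Payload shape and field names are hypothetical; a real Flask/Django
    handler would also persist the order to a database.
    """
    try:
        payload = json.loads(raw_body)
    except json.JSONDecodeError:
        return 400, json.dumps({"error": "invalid JSON"})
    if "item" not in payload or "qty" not in payload:
        return 422, json.dumps({"error": "missing item or qty"})
    # Echo a confirmation in the same JSON message format.
    return 201, json.dumps(
        {"item": payload["item"], "qty": payload["qty"], "status": "created"}
    )


status, body = handle_create_order(b'{"item": "widget", "qty": 3}')
print(status, body)
```

The same validate-parse-respond structure carries over directly to a Flask route, where the framework supplies the request body and serializes the response.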
Data Engineer
Overview
LMI is a consultancy dedicated to powering a future-ready, high-performing government, drawing from expertise in digital and analytic solutions, logistics, and management advisory services.
We deliver integrated capabilities that incorporate emerging technologies and are tailored to customers’ unique mission needs, backed by objective research and data analysis.
Founded in 1961 to help the Department of Defense resolve complex logistics management challenges, LMI continues to enable growth and transformation, enhance operational readiness and resiliency, and ensure mission success for federal civilian and defense agencies.
This position is remote but may require travel to a client site in Washington, DC (Georgetown).
Responsibilities
As a Data Engineer, you will help develop and deploy technical solutions to solve our customers’ hardest problems, using various platforms to integrate data, transform insights, and build first-class applications for operational decisions.
You will leverage everything around you: core customer products, open-source technologies (e.g., GHE), and anything you and your team can build to drive real impact.
In this role, you work with customers around the globe, where you gain rare insight into the world’s most important industries and institutions.
Each mission presents different challenges, from the regulatory environment to the nature of the data to the user population.
You will work to accommodate all aspects of an environment to drive real technical outcomes for our customers.
Core Responsibilities
- Set up transfers of data feeds from source systems into a location accessible to Foundry, and integrate them with existing data using enterprise architecture best practices
- Debug issues related to delayed or missing data feeds
- Monitor build progress and debug build problems in conjunction with deployment teams
- Use Foundry’s application development framework to design applications that address operational questions
- Develop and iterate rapidly with SMEs, including testing and troubleshooting application issues
- Execute requests for information (RFIs) concerning the platform’s data footprint
Qualifications
- Bachelor’s degree in data science, mathematics, statistics, economics, computer science, engineering, or a related business or quantitative discipline (Master’s degree preferred)
- Preferred : Interim or Active DoD Secret clearance.
- Strong engineering background, preferably in fields such as Computer Science, Mathematics, Software Engineering, Physics, or Data Science.
- Proficiency with programming languages such as Python (PySpark, Pandas), SQL, R, JavaScript, or similar languages
- Working knowledge of databases and SQL; preferred qualifications include linking analytic and data visualization products to database connections
- At least 9 years of experience in the field
- Ability to work effectively in teams of technical and non-technical individuals.
- Skill and comfort working in a rapidly changing environment with dynamic objectives and iteration with users.
- Demonstrated ability to continuously learn, work independently, and make decisions with minimal supervision.
- Proven track-record of strong customer communications including feedback gathering, execution updates, and troubleshooting.
LMI is an Equal Opportunity Employer. LMI is committed to the fair treatment of all and to our policy of providing applicants and employees with equal employment opportunities.
LMI recruits, hires, trains, and promotes people without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, pregnancy, disability, age, protected veteran status, citizenship status, genetic information, or any other characteristic protected by applicable federal, state, or local law.
If you are a person with a disability needing assistance with the application process, please contact
Senior data engineer
At Fitch, we have an open culture where employees are able to exchange ideas and perspectives, throughout the organization, irrespective of their seniority.
Your voice will be heard allowing you to have a real impact. We embrace diversity and appreciate authenticity encouraging an environment where employees can be their true selves.
Our inclusive and progressive approach helps us to keep a balanced perspective. Fitch is also committed to supporting its employees by advancing conversations around diversity, equity and inclusion.
Fitch’s Employee Resource Groups (ERGs) have been established by employees who have joined together as a workplace community based on similar backgrounds or life experiences.
Fitch’s ERGs are available to connect employees with others within the organization to offer professional and personal support.
With our expertise, we are not only creating data and information, but also producing timely insights from every angle to influence decision making in this ever changing and highly competitive market.
We have a relentless hunger to innovate and unlock the power of human insights and to drive value for our customers. There has never been a better time to make an impact and we invite you to join us on this journey.
Fitch Ratings is a leading provider of credit ratings, commentary and research. Dedicated to providing value beyond the rating through independent and prospective credit opinions, Fitch Ratings offers global perspectives shaped by strong local market experience and credit market expertise.
The additional context, perspective and insights we provide have helped fund a century of growth and enables you to make important credit judgments with confidence.
Fitch is seeking a strong Data Engineer to improve critical data systems used widely by internal and external stakeholders.
The ideal candidate is someone who:
- Has 5+ years of data engineering experience developing large data pipelines
- Has strong experience developing in Python and Java
- Has strong experience with relational SQL and NoSQL databases
Roles & Responsibilities
- Build data pipelines and applications to stream and process datasets at low latencies.
- Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources, using SQL, NoSQL, and Kafka alongside AWS big data technologies.
- Collaborate with Data Product Managers, Data Architects, and other Data Engineers to design, implement, and deliver successful data solutions.
- Build analytics tools that utilize the data pipeline to provide actionable insights into customer acquisition, operational efficiency, and other key business performance metrics.
- Track data lineage, ensure data quality and improve discoverability of data.
- Work in Agile Environment (Scrum) and interact with multi-functional teams (Product Owners, Scrum Masters, Developers, Designers, Data Analysts)
Required Skills
- Strong experience developing in Python & Java.
- 5+ years of data engineering experience developing large data pipelines
- Strong SQL and NoSQL skills and ability to create queries to extract data and build performant datasets.
- Hands-on experience with message queuing and stream data processing (Kafka Streams).
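The stream-processing bullet above is ultimately about ideas like windowed aggregation over an event stream. In production this would be Kafka Streams or a similar engine; the pure-Python sketch below (with invented event names, and none of the out-of-order handling, state stores, or fault tolerance a real stream processor provides) conveys the core idea.

```python
from collections import defaultdict


def tumbling_window_counts(events, window_secs=60):
    """Count events per key in fixed-size (tumbling) time windows.

    `events` is an iterable of (timestamp_secs, key) pairs. Each event is
    assigned to the window containing its timestamp, and counts are keyed
    by (window_start, key).
    """
    counts = defaultdict(int)
    for ts, key in events:
        window_start = (ts // window_secs) * window_secs
        counts[(window_start, key)] += 1
    return dict(counts)


# Hypothetical market-data events: (seconds-since-start, event type).
events = [(3, "trade"), (45, "trade"), (61, "quote"), (75, "trade")]
print(tumbling_window_counts(events))
```

A Kafka Streams topology expresses the same computation declaratively (`groupByKey().windowedBy(...).count()`), with the window assignment logic above happening inside the engine.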
Desirable Skills
- Experience with relational SQL and NoSQL databases: any RDBMS (Oracle, Postgres) and any NoSQL store (Cassandra, MongoDB, Redis, etc.)
- Hands-on experience with distributed systems such as Spark, Hadoop (HDFS, Hive, Presto, PySpark) to query and process data.
- Strong analytic skills related to working with unstructured datasets.
- Hands-on experience using AWS cloud services: EC2, Lambda, S3, Athena, Glue, and EMR
- Experience with Redshift or Snowflake
- Experience in the Financial Services industry
Person specification
- Excellent problem solving and analytical skills
- Highly motivated to deliver results and meet deadlines
Data Engineer
Posted by Python US Recruiter. Fourier has partnered with several world-leading Hedge Funds, Prop Traders, and Market Makers in a search for elite and eager Data Engineers to join them.
Our clients are looking for the best data engineers in the industry with a proven track record of delivering scalable and robust data systems and are driven by solving the seemingly unsolvable problems.
They are looking for individuals who are driven and motivated but most importantly - excited by Data!
Do you love working with Python and have experience managing ETL pipelines, building a scalable distributed data platform or deriving insights from alternative data sets?
Do you have an affinity for learning new technologies, and are you always looking to broaden your existing technical knowledge?
These clients are at the pinnacle of finance, and therefore leading compensation packages should be expected.
Primary Tech Stack: