Profile Description -
Why work with us?
- Engineering team of former startup founders, IIT alumni, and serial hackathon winners
- A high standard of engineering quality and the opportunity to work on a cutting-edge tech stack
- Solve unique scalability challenges
- Learn how India's lending system works from the inside
- A high-impact role at a fast-paced, high-growth company
As a Data Engineer, you will -
- Create and maintain optimal data pipeline architecture (a brief orchestration sketch follows this list).
- Assemble large, complex data sets that meet functional / non-functional business requirements.
- Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
- Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and AWS ‘big data’ technologies.
- Build analytics tools that utilize the data pipeline to provide actionable insights into customer acquisition, operational efficiency and other key business performance metrics.
- Work with stakeholders including the Executive, Product, Data and Design teams to assist with data-related technical issues and support their data infrastructure needs.
- Keep our data separated and secure across national boundaries through multiple data centers and AWS regions.
- Create data tools for analytics and data science team members that assist them in building and optimizing our product into an innovative industry leader.
- Work with data and analytics experts to strive for greater functionality in our data systems.
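For illustration, here is a minimal sketch of the kind of pipeline orchestration this role involves, using Airflow (one of the workflow tools listed under the qualifications below). The DAG name, schedule, and task bodies are hypothetical placeholders, not a description of our actual stack:

```python
# Hypothetical Airflow DAG sketching a daily extract -> transform -> load pipeline.
# All names and task logic are illustrative placeholders.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract():
    # Pull raw records from an upstream source (placeholder).
    print("extracting")


def transform():
    # Clean and reshape the extracted records (placeholder).
    print("transforming")


def load():
    # Write the transformed records to the warehouse (placeholder).
    print("loading")


with DAG(
    dag_id="daily_events_pipeline",  # hypothetical pipeline name
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    t_extract = PythonOperator(task_id="extract", python_callable=extract)
    t_transform = PythonOperator(task_id="transform", python_callable=transform)
    t_load = PythonOperator(task_id="load", python_callable=load)

    # Run the three stages in order.
    t_extract >> t_transform >> t_load
```

In a real pipeline, each callable would typically extract from a source system, transform with Spark or SQL, and load into a warehouse such as Redshift.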
Qualifications for a Data Engineer -
- 3+ years of experience in a Data Engineer role
- Advanced working knowledge of SQL and experience with relational databases, including query authoring and familiarity with a variety of database systems
- Experience building and optimizing ‘big data’ data pipelines, architectures, and data sets
- Experience performing root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement
- Strong analytic skills related to working with unstructured datasets
- Experience building processes that support data transformation, data structures, metadata, dependency, and workload management
- A successful history of manipulating, processing and extracting value from large disconnected datasets
- Working knowledge of message queuing, stream processing, and highly scalable ‘big data’ data stores (see the streaming sketch after this list)
- Experience with big data tools: Hadoop, Spark, Kafka, etc.
- Experience with relational SQL and NoSQL databases, including Postgres and Cassandra
- Experience with data pipeline and workflow management tools: Azkaban, Luigi, Airflow, etc.
- Experience with AWS cloud services: EC2, EMR, RDS, Redshift, Kinesis
- Experience with stream-processing systems: Storm, Spark Streaming, etc.
- Experience with object-oriented and functional scripting languages: Python, Java, C++, Scala, etc.
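As a rough sketch of the stream-processing experience described above, here is a hypothetical PySpark Structured Streaming job that consumes JSON events from a Kafka topic and keeps a running count per event type. The broker address, topic name, and event schema are illustrative assumptions, not part of the role description:

```python
# Hypothetical PySpark Structured Streaming job: read JSON events from Kafka and
# maintain a running count per event type. Requires the spark-sql-kafka package
# on the classpath when submitted.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F
from pyspark.sql.types import StringType, StructField, StructType

spark = SparkSession.builder.appName("event-counts").getOrCreate()

# Placeholder schema for the incoming JSON payload.
event_schema = StructType([
    StructField("event_type", StringType()),
    StructField("user_id", StringType()),
])

# Subscribe to a hypothetical "events" topic.
raw = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")  # placeholder broker
    .option("subscribe", "events")                     # placeholder topic
    .load()
)

# Kafka delivers the payload as bytes in the `value` column; parse it as JSON.
events = raw.select(
    F.from_json(F.col("value").cast("string"), event_schema).alias("e")
).select("e.*")

# Running count of events per type.
counts = events.groupBy("event_type").count()

# Write the aggregates to the console for illustration; a real job would target
# a sink such as Redshift or S3.
query = counts.writeStream.outputMode("complete").format("console").start()
query.awaitTermination()
```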