
Big Data Lead

Srijan Technologies Pvt Ltd


Details of the offer

Srijan Technologies is a 19-year-old technology services firm.
For a large part of its life, Srijan has specialised in building content management systems, with expertise in PHP-based open-source CMSs, specifically Drupal. In recent years Srijan has diversified into i) Data Engineering using NodeJS and Python, ii) Data Science -- Analytics and Machine Learning, and iii) API Management using APIGEE.
Srijan has approximately 400 people. Srijan's development offices in India are located in New Delhi, Gurugram, Goa, Bangalore, and Mumbai; the Delhi, Gurugram, and Goa offices are the largest. In addition, a few developers and delivery leads are located in several countries globally -- USA (New York, Charlotte), Singapore, Philippines (Manila), Australia (Sydney, Brisbane, Melbourne), Germany (Berlin), and Japan (Tokyo). In each of these countries, Srijan has a functional legal subsidiary.
Srijan works largely with enterprises or mid-large sized global firms and focuses on recurring business from these accounts, thereby bringing much-needed predictability of revenue for high-growth companies. It works with several top brands at the moment.
The firm is beginning to invest in startups and in joint research projects with top institutes. For instance, it recently partnered with IIT-Delhi to invest in an 18-month project for building a solution for ‘Honey traceability using Blockchain’.
The leadership team at Srijan has set itself an audacious goal of reaching $25 million in revenue (while maintaining healthy EBITDA margins) in FY 2021 -- doubling our revenues. This requires significant technology and delivery leadership bandwidth to be created in the firm to ensure our high-quality standards are not compromised.
Each year Srijan donates 7% of its profits to Srijan Foundation Trust, a registered non-profit that runs several projects, including non-formal schools (directly or via partner organizations) and Indic civilizational projects such as #SrijanTalks.
A Big Data Lead with Kafka (primary focus) and Hadoop skill sets to work on an exciting Streaming / Data Engineering team.
Tech Lead with 4-6 years of experience, predominantly on Kafka (AWS experience is mandatory).

Responsibilities include:
- Develop scalable and reliable data solutions to move data across systems from multiple sources, in both real-time and batch modes (Kafka)
- Build Producer and Consumer applications on Kafka, with appropriate Kafka configurations
- Design, write, and operationalize new Kafka Connectors using the framework
- Accelerate adoption of the Kafka ecosystem by creating a framework for leveraging technologies such as Kafka Connect, KStreams/KSQL, Schema Registry, and other streaming-oriented technology
- Implement stream processing using Kafka Streams / KSQL / Spark jobs along with Kafka
- Bring forward ideas to experiment with, and work in teams to transform ideas into reality
- Architect data structures that meet the reporting timelines
- Work directly with engineering teams to design and build their development requirements
- Maintain high standards of software quality by establishing good practices and habits within the development team, while delivering solutions on time and on budget
- Demonstrate proven communication skills, both written and oral
- Quickly learn new tools and paradigms to deploy cutting-edge solutions
- Create large-scale deployments using newly conceptualized methodologies
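For candidates unfamiliar with the producer/consumer responsibilities above, they rest on Kafka's core model: records are keyed, keys hash to partitions, and each consumer tracks an offset per partition. The following is a minimal pure-Python sketch of that model only -- `MiniTopic` and `MiniConsumer` are illustrative names, not the real Kafka client API:

```python
from collections import defaultdict

class MiniTopic:
    """Toy model of a Kafka topic: keyed records hash to partitions,
    and each partition is an append-only log read by offset."""
    def __init__(self, num_partitions=3):
        self.partitions = [[] for _ in range(num_partitions)]

    def produce(self, key, value):
        # Same key -> same partition; this is how Kafka preserves
        # per-key ordering across many producers.
        p = hash(key) % len(self.partitions)
        self.partitions[p].append((key, value))
        return p

class MiniConsumer:
    """Tracks one offset per partition, like a consumer group member."""
    def __init__(self, topic):
        self.topic = topic
        self.offsets = defaultdict(int)

    def poll(self):
        records = []
        for p, log in enumerate(self.topic.partitions):
            while self.offsets[p] < len(log):
                records.append(log[self.offsets[p]])
                self.offsets[p] += 1
        return records

topic = MiniTopic()
for i in range(5):
    topic.produce(key=f"user-{i % 2}", value=i)

consumer = MiniConsumer(topic)
first = consumer.poll()   # drains everything produced so far
second = consumer.poll()  # nothing new since last poll -> empty
```

In the real client, partition assignment, offset commits, and rebalancing are handled by the broker and consumer group protocol; the sketch only shows why keying and offsets matter.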

Requirements:
- Proven hands-on experience with Kafka is a must
- Proven hands-on experience with the Hadoop stack (HDFS, MapReduce, Spark)
- Core development experience in one or more of these languages: Java, Python/PySpark, Scala, etc.
- Good experience developing Producers and Consumers for Kafka, as well as custom Kafka Connectors
- 2+ years of experience developing applications using Kafka (architecture, Producer and Consumer APIs, real-time data pipelines/streaming)
- 2+ years of experience configuring and fine-tuning Kafka for optimal production performance
- Experience using Kafka APIs to build producer and consumer applications, along with expertise in implementing KStreams components; has developed KStreams pipelines and deployed KStreams clusters
- Strong knowledge of the Kafka Connect framework, with experience using several connector types (HTTP REST proxy, JMS, File, SFTP, JDBC, Splunk, Salesforce) and supporting wire-format translations; knowledge of connectors available from Confluent and the community
- Experience developing SQL queries and knowing best practices for using KSQL vs. KStreams will be an added advantage
- Expertise with the Hadoop ecosystem, primarily Spark, Kafka, NiFi, etc.
- Experience integrating data from multiple data sources
- Experience with stream-processing systems (Storm, Spark Streaming, etc.) will be an advantage
- Experience with relational SQL and NoSQL databases: one or more of Postgres, Cassandra, HBase, MongoDB, etc.
- Experience with AWS cloud services such as S3, EC2, EMR, RDS, and Redshift will be an added advantage
- Excellent grasp of data structures & algorithms and good analytical skills
- Strong communication skills
- Ability to work with and collaborate across the team
- A good "can do" attitude
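The KStreams/KSQL windowing called out above boils down to bucketing keyed events into fixed time windows and aggregating per bucket. A rough pure-Python sketch of the semantics (in KStreams this would be `groupByKey().windowedBy(...).count()`, or a KSQL `WINDOW TUMBLING` query; the function below is illustrative, not a real API):

```python
from collections import Counter

def tumbling_window_counts(events, window_ms):
    """Count events per key per tumbling (non-overlapping) window.
    `events` is an iterable of (timestamp_ms, key) pairs."""
    counts = Counter()
    for ts, key in events:
        # Each event falls into exactly one window, identified by its start.
        window_start = (ts // window_ms) * window_ms
        counts[(window_start, key)] += 1
    return counts

events = [
    (100, "clicks"), (450, "clicks"), (900, "views"),
    (1100, "clicks"), (1999, "views"),
]
result = tumbling_window_counts(events, window_ms=1000)
# e.g. result[(0, "clicks")] == 2 -- both events land in the [0, 1000) window
```

The real engines add what this sketch omits: incremental state stores, late-arrival handling via grace periods, and emitting window updates downstream as a changelog stream.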