
Big Data Architect - Kafka

Company:

Srijan Technologies


Details of the offer

Role:

A Big Data Architect with Kafka (primary focus) and Hadoop skill sets to work on an exciting Streaming / Data Engineering team (7+ years of total experience)

Responsibilities include:

Responsible for technical design and implementation in big data engineering, primarily Kafka
Develop scalable and reliable data solutions to move data across systems from multiple sources, in both real-time and batch modes (Kafka)
Build producer and consumer applications on Kafka, and define appropriate Kafka configurations
Design, write, and operationalize new Kafka connectors using the Kafka Connect framework
Accelerate adoption of the Kafka ecosystem by creating a framework for leveraging technologies such as Kafka Connect, KStreams/KSQL, Schema Registry, and other streaming-oriented technology

Implement Stream processing using Kafka Streams / KSQL / Spark Jobs along with Kafka

Develop both deployment architecture and scripts for automated system deployment, in on-premise as well as cloud (AWS) environments
Bring forward ideas to experiment and work in teams to transform ideas to reality
Architect data structures that meet the reporting timelines
Work directly with engineering teams to design and build their development requirements
Maintain high standards of software quality by establishing good practices and habits within the development team while delivering solutions on time and on budget.
Facilitate the agile development process through daily scrum, sprint planning, sprint demo, and retrospective meetings.
Participate in peer-reviews of solution designs and related code
Analyze and resolve technical and application problems
Proven communication skills, both written and oral
Demonstrated ability to quickly learn new tools and paradigms to deploy cutting edge solutions
Create large scale deployments using newly conceptualized methodologies
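The producer/consumer and stream-processing responsibilities above can be illustrated with a broker-free, in-memory sketch. This is purely illustrative Python, not the real Kafka client; `MiniTopic` and `MiniConsumer` are hypothetical names standing in for a topic with keyed partitioning and a consumer that tracks its own offsets:

```python
from collections import defaultdict

class MiniTopic:
    """Hypothetical in-memory stand-in for a Kafka topic: each partition
    is an append-only log of (key, value) records."""
    def __init__(self, partitions=2):
        self.partitions = [[] for _ in range(partitions)]

    def produce(self, key, value):
        # Kafka-style keyed partitioning: the same key always lands in the
        # same partition, preserving per-key ordering.
        p = hash(key) % len(self.partitions)
        self.partitions[p].append((key, value))
        return p

class MiniConsumer:
    """Tracks a per-partition read offset, like a Kafka consumer's
    committed offsets."""
    def __init__(self, topic):
        self.topic = topic
        self.offsets = defaultdict(int)  # partition -> next offset to read

    def poll(self):
        # Drain every partition from the last committed offset onward.
        records = []
        for p, log in enumerate(self.topic.partitions):
            while self.offsets[p] < len(log):
                records.append(log[self.offsets[p]])
                self.offsets[p] += 1
        return records

topic = MiniTopic()
topic.produce("user-1", "login")
topic.produce("user-2", "click")
consumer = MiniConsumer(topic)
first = consumer.poll()   # both records, in partition order
second = consumer.poll()  # empty: offsets are already past the end
```

A real implementation would replace `MiniTopic.produce` with the Kafka Producer API and `MiniConsumer.poll` with a consumer-group subscription, but the partition/offset mechanics shown here are the same ones the role's "Producer and Consumer applications" responsibility refers to.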

Skills:

Proven hands-on experience with Kafka is a must.

Proven hands-on experience with the Hadoop stack (HDFS, MapReduce, Spark).
Core development experience in one or more of these languages: Java, Python/PySpark, Scala, etc.

Good experience in developing producers and consumers for Kafka, as well as custom connectors for Kafka

3+ years of experience developing applications using Kafka architecture, the Kafka Producer and Consumer APIs, and real-time data pipelines/streaming

2+ years of experience configuring and fine-tuning Kafka for optimal production performance

Experience using the Kafka APIs to build producer and consumer applications, along with expertise in implementing KStreams components; has developed KStreams pipelines and deployed KStreams clusters
Strong knowledge of the Kafka Connect framework, with experience using several connector types (HTTP REST proxy, JMS, File, SFTP, JDBC, Splunk, Salesforce) and supporting wire-format translations; knowledge of connectors available from Confluent and the community
Experience developing KSQL queries and knowledge of best practices for choosing KSQL vs. KStreams will be an added advantage
Deep understanding of different messaging paradigms (pub/sub, queuing), as well as delivery models, quality-of-service, and fault-tolerance architectures
Expertise with the Hadoop ecosystem, primarily Spark, Kafka, NiFi, etc.
Experience with integration of data from multiple data sources
Experience with stream-processing systems (Storm, Spark Streaming, etc.) will be an added advantage
Experience with relational SQL and NoSQL databases, such as Postgres, Cassandra, HBase, MongoDB, etc.
Experience with AWS cloud services like S3, EC2, EMR, RDS, Redshift will be an added advantage
Excellent knowledge of data structures and algorithms, and strong analytical skills
Strong communication skills
Ability to work with and collaborate across the team
A good "can do" attitude.
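The KStreams expertise listed above centers on topologies like the canonical word count (flat-map a stream of lines into words, group by key, and maintain a running count in a KTable-like state store). A minimal sketch of that pattern in plain Python, with no Kafka dependency and hypothetical function names, might look like:

```python
from collections import Counter

def flat_map_words(lines):
    # Analogous to KStream.flatMapValues: one input line fans out
    # into many word records.
    for line in lines:
        for word in line.lower().split():
            yield word

def word_count(lines):
    counts = Counter()      # stands in for the KTable / state store
    for word in flat_map_words(lines):
        counts[word] += 1   # per-key aggregation, like count() on a KGroupedStream
    return dict(counts)

stream = ["Kafka streams", "kafka connect"]
print(word_count(stream))  # {'kafka': 2, 'streams': 1, 'connect': 1}
```

In a real KStreams application the input would be a Kafka topic, the state store would be backed by a changelog topic for fault tolerance, and updates would be emitted continuously rather than returned once; the sketch only shows the shape of the dataflow.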


Source: Timesjobs
