About the Company:
Netomi is the leading agentic AI platform for enterprise customer experience. We work with the largest global brands like Delta Airlines, MetLife, MGM, United, and others to enable agentic automation at scale across the entire customer journey. Our no-code platform delivers the fastest time to market, lowest total cost of ownership, and simple, scalable management of AI agents for any CX use case. Backed by WndrCo, Y Combinator, and Index Ventures, we help enterprises drive efficiency, lower costs, and deliver higher quality customer experiences.
Want to be part of the AI revolution and transform how the world’s largest global brands do business? Join us!
Netomi is seeking a highly analytical, detail-oriented candidate to join the Analytics team in Gurugram. As part of the team, you will work with data science, product, engineering, and customer success teams on complex data and trend analyses, proposing improvements that make the customer experience better. You will also be responsible for benchmarking and measuring the performance of product operations projects, building and publishing detailed scorecards and reports, and identifying and driving new opportunities based on customer and business data.
We are looking for an engineer with a passion for using data to discover and solve real-world problems. You will enjoy working with rich data sets and modern business intelligence technology, and seeing your insights drive features for our customers. You will also have the opportunity to help develop the policies, processes, and tools that address product quality challenges, in collaboration with other teams.
Responsibilities:
Architect and implement clean, modular, and scalable backend services using Java, Spring Boot, and modern microservice principles.
Design efficient database schemas and write optimized queries for relational databases on RDS (MySQL/PostgreSQL) and, optionally, NoSQL stores such as Elasticsearch, MongoDB, or DynamoDB.
Integrate Kafka or RabbitMQ to build robust, loosely coupled event-driven architectures (see the sketch after this list).
Architect and implement scalable, secure, and reliable data pipelines using modern data platforms (e.g., Spark, Databricks, Airflow, Snowflake).
Develop ETL/ELT processes to ingest data from various structured and unstructured sources.
Perform Exploratory Data Analysis (EDA) to uncover trends, validate data integrity, and derive insights that inform data product development and business decisions.
Collaborate closely with data scientists, analysts, and software engineers to design data models that support high-quality analytics and real-time insights.
Profile and tune backend performance across databases, APIs, and infrastructure.
Write clean, maintainable code with comprehensive unit and integration tests to ensure reliability and stability.
Thrive in an agile, collaborative environment and take ownership of end-to-end feature delivery.
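To give a flavor of the event-driven integration mentioned above, here is a minimal consumer sketch using Spring Boot and spring-kafka. It assumes a Spring Boot application with spring-kafka on the classpath; the topic name, consumer group, and TicketEventConsumer class are illustrative assumptions, not part of Netomi's actual codebase.

    import org.springframework.kafka.annotation.KafkaListener;
    import org.springframework.stereotype.Component;

    // Illustrative consumer: reacts to events another service publishes to the
    // broker, so the two services stay decoupled rather than calling each other
    // directly. Topic and group id below are hypothetical.
    @Component
    public class TicketEventConsumer {

        @KafkaListener(topics = "ticket-created", groupId = "analytics-service")
        public void onTicketCreated(String payload) {
            // Deserialize the payload and hand it to downstream processing;
            // kept trivial here for the sketch.
            System.out.println("Received ticket event: " + payload);
        }
    }

The producer side never knows who consumes the event, which is what keeps the architecture loosely coupled: new consumers can subscribe to the topic without any change to the publishing service.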
Requirements:
Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
4+ years of hands-on experience in data engineering or backend software development roles.
Strong expertise in Java and the Spring Boot ecosystem.
Solid understanding of relational databases (MySQL or PostgreSQL, e.g., on RDS).
Experience with Apache Kafka or RabbitMQ for building asynchronous, decoupled systems.
Proficiency with Python, SQL, and at least one data pipeline orchestration tool (e.g., Apache Airflow, Luigi, Prefect).
Strong experience with cloud-based data platforms (e.g., AWS Redshift, GCP BigQuery, Snowflake, Databricks); a pipeline sketch follows this list.
Deep understanding of data modeling, data warehousing, and distributed systems.
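For a sense of the batch ETL work the role involves, here is a minimal sketch using Spark's Java API: extract raw JSON events, filter and deduplicate them, and load partitioned Parquet for downstream analytics. The bucket paths, column names, and OrdersEtl class are hypothetical, chosen only to illustrate the pattern.

    import org.apache.spark.sql.Dataset;
    import org.apache.spark.sql.Row;
    import org.apache.spark.sql.SparkSession;

    public class OrdersEtl {
        public static void main(String[] args) {
            SparkSession spark = SparkSession.builder()
                .appName("orders-etl")
                .getOrCreate();

            // Extract: raw JSON events (path is illustrative)
            Dataset<Row> raw = spark.read().json("s3://example-bucket/raw/orders/");

            // Transform: keep completed orders, deduplicate on order_id
            Dataset<Row> cleaned = raw
                .filter("status = 'COMPLETED'")
                .dropDuplicates("order_id");

            // Load: write Parquet partitioned by date for analytics queries
            cleaned.write()
                .mode("overwrite")
                .partitionBy("order_date")
                .parquet("s3://example-bucket/curated/orders/");

            spark.stop();
        }
    }

In practice a job like this would be scheduled by an orchestrator such as Airflow and run on a managed platform like Databricks, tying together several of the tools named above.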
Additional Skills:
Familiarity with DevOps practices (CI/CD, infrastructure as code, containerization with Docker/Kubernetes).
Exposure to AI/ML-integrated solutions or interest in working alongside data science teams.
Knowledge of data security and privacy regulations (e.g., GDPR, HIPAA).
Familiarity with prompt engineering and how LLM-based systems interact with data.