Careers

Unlock Your Potential in Data Excellence

Ignite your career in data excellence with Ignate. Join us at the intersection of innovation and insight, where your journey to professional growth begins.

Senior Spark Data Developer

Responsibilities:

⦿ Design, develop, and maintain robust data processing pipelines using Apache Spark (see the short illustrative sketch at the end of this listing).
⦿ Implement efficient algorithms and data structures to optimize performance and scalability.
⦿ Collaborate with data engineers and data scientists to integrate machine learning models into production pipelines.
⦿ Troubleshoot and debug issues in existing codebase and provide timely resolutions.
⦿ Conduct code reviews and provide constructive feedback to team members.
⦿ Stay up to date with the latest advancements in big data technologies and incorporate them into the development process.
⦿ Mentor junior developers and contribute to their professional growth.

Qualification:

⦿ Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
⦿ Proven experience (5+ years) developing data processing applications with Scala and PySpark.
⦿ Strong proficiency in functional programming paradigms, particularly with the Cats library.
⦿ Proficiency in the Rust programming language is highly desirable.
⦿ Experience designing and optimizing distributed systems for large-scale data processing.
⦿ Basic knowledge of streaming services.
⦿ Familiarity with design pattern principles.
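To give candidates a concrete flavor of the pipeline work described in this listing, here is a minimal PySpark sketch of a batch aggregation job. It is purely illustrative: the paths, schema, and aggregation logic are assumptions made for this example and do not describe Ignate's actual pipelines.

```python
# Minimal, hypothetical PySpark batch pipeline sketch.
# Paths, column names, and aggregation logic are illustrative only.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("example-events-pipeline").getOrCreate()

# Read raw event data (assumed schema: user_id, event_type, amount, event_date).
events = spark.read.parquet("data/raw_events.parquet")

# Basic cleansing: drop malformed rows and deduplicate.
cleaned = (
    events
    .dropna(subset=["user_id", "event_type"])
    .dropDuplicates(["user_id", "event_type", "event_date"])
)

# Aggregate daily activity per user.
daily_summary = (
    cleaned
    .groupBy("user_id", "event_date")
    .agg(
        F.count("*").alias("event_count"),
        F.sum("amount").alias("total_amount"),
    )
)

# Write results partitioned by date for downstream consumers.
daily_summary.write.mode("overwrite").partitionBy("event_date").parquet(
    "data/daily_summary/"
)

spark.stop()
```

A production pipeline would add schema validation, incremental loads, and monitoring, but the overall shape of the job is similar.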

Machine Learning (ML) Engineer

Responsibilities:

⦿ Design, develop, and implement machine learning algorithms and models that address complex business problems and enhance data-driven decision-making processes.
⦿ Perform comprehensive data analysis, cleansing, preprocessing, and feature engineering to ensure the quality and relevance of input data for optimal model performance.
⦿ Train, validate, and fine-tune machine learning models using appropriate techniques and methodologies, ensuring accurate predictions and optimal model generalization.
⦿ Identify and select relevant features to improve model efficiency, interpretability, and predictive accuracy, considering domain-specific insights.
⦿ Continuously refine and optimize machine learning models for improved performance, scalability, and efficiency, taking into account real-world constraints and considerations.

Qualification:

⦿ Bachelor's degree in Computer Science or a related field.
⦿ 5+ years of experience in database support.
⦿ Solid understanding of machine learning algorithms, NLP techniques, and frameworks such as TensorFlow, PyTorch, spaCy, transformers, and scikit-learn.
⦿ Proficiency in programming languages such as Python, along with relevant libraries for data manipulation and analysis (e.g., NLTK, Pandas, NumPy).
⦿ Experience with pre-trained language models that leverage transfer learning, such as Llama, GPT, Bard, BERT, and Molly.
⦿ Knowledge of semantic analysis, sentiment analysis, topic modeling, and other NLP tasks (a small illustrative sketch follows this list).
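As a rough illustration of the NLP tasks listed above, the sketch below trains a tiny text-classification baseline with scikit-learn. The toy documents, labels, and model choice are assumptions made for this example only and do not reflect any Ignate dataset or production model.

```python
# Minimal, hypothetical text-classification baseline with scikit-learn.
# The toy documents and labels below are illustrative only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline

# Toy sentiment data (assumed labels: 1 = positive, 0 = negative).
documents = [
    "The new release is fast and reliable",
    "Support was slow and the product kept crashing",
    "Great experience, easy to set up",
    "Disappointing performance and confusing documentation",
]
labels = [1, 0, 1, 0]

# TF-IDF features feeding a logistic regression classifier.
model = Pipeline([
    ("tfidf", TfidfVectorizer(ngram_range=(1, 2), min_df=1)),
    ("clf", LogisticRegression(max_iter=1000)),
])
model.fit(documents, labels)

# Score an unseen sentence.
print(model.predict(["The setup was quick and the tool works well"]))
```

In practice this role works with the larger frameworks and pre-trained models listed above; the sketch only shows the general train-and-predict workflow.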

MEARN Stack Software Engineer (MongoDB, Express, Angular, React, Node)

Responsibilities:

⦿ Design, develop, and maintain MEARN stack web applications.
⦿ Implement software solutions meeting business requirements and coding standards.
⦿ Collaborate with a team to add features, fix bugs, and enhance performance.
⦿ Troubleshoot and debug application issues, ensuring timely resolutions.
⦿ Perform unit testing for quality assurance.

Qualification:

⦿ Bachelor's degree in Computer Science or related field.
⦿ 2+ years of experience in MEARN stack web application development.
⦿ Proficiency in JavaScript and TypeScript.
⦿ Strong front-end skills with HTML5, CSS3, and responsive design.
⦿ Familiarity with databases like MySQL and MongoDB.

Data Engineer

Responsibilities:

⦿ Design and implement scalable data pipelines for ETL processes.
⦿ Develop data warehouses for efficient data storage.
⦿ Analyze data to identify patterns and solutions.
⦿ Troubleshoot and resolve application issues.
⦿ Ensure compliance with data governance and privacy regulations.

Qualification:

⦿ Bachelor's degree in Computer Science or related field.
⦿ 2+ years of experience with data engineering tools and languages such as Hadoop, Spark, Python, Java, and Scala.
⦿ Proficiency in ETL pipeline development (a short illustrative sketch follows this list).
⦿ Familiarity with data integration concepts.
⦿ Hands-on experience with databases like MySQL, PostgreSQL, and MongoDB.
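For a concrete, if simplified, picture of the ETL work described above, here is a minimal Python sketch that extracts records from a CSV file, cleans them with pandas, and loads them into a local SQLite table. The file names, columns, and target table are hypothetical and used only for illustration.

```python
# Minimal, hypothetical ETL sketch: CSV -> cleaned DataFrame -> SQLite table.
# File names, columns, and the target table are illustrative assumptions.
import sqlite3

import pandas as pd

# Extract: read raw orders from a CSV export
# (assumed columns: order_id, customer_id, amount, order_date).
raw = pd.read_csv("orders_raw.csv")

# Transform: drop incomplete rows, normalize types, derive a reporting column.
cleaned = raw.dropna(subset=["order_id", "customer_id", "amount"]).copy()
cleaned["order_date"] = pd.to_datetime(cleaned["order_date"], errors="coerce")
cleaned["order_month"] = cleaned["order_date"].dt.to_period("M").astype(str)

# Load: write the curated table into a local SQLite database.
conn = sqlite3.connect("warehouse.db")
cleaned.to_sql("orders_curated", conn, if_exists="replace", index=False)
conn.close()
```

Production work in this role targets warehouses and databases such as those listed above rather than SQLite, but the extract, transform, and load steps follow the same pattern.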

Submit Your Resume

Or

You can send your resume to [email protected]