
Your Next Career Starts Here

A fast-paced, supportive environment where your work matters. We give you the tools, the trust, and the freedom to do your best work.

Explore opportunities

System Software Engineer

Hyderabad, India/Remote

We are at the forefront of revolutionizing data processing for traditional analytics and cutting-edge GenAI preprocessing. We are building an innovative data processing engine that is transforming how Apache Spark, Apache Flink, Ray, and others operate on diverse, large-scale data. Our team of engineers drives and adopts advances in hardware-accelerated computing, parallel processing of large-scale data, query optimization, distributed systems, compilers, machine learning, and cloud-native computing. We are looking for specialists to join our engineering team and shape the future of accelerated data processing.

The Opportunity: As a System Software Engineer, you will be a key individual contributor in advancing the operating-system-level capabilities underlying our data processing engine. You will enhance the functional breadth, performance, scale, and reliability of these OS-level services on diverse accelerated hardware platforms. This is a unique opportunity to make a significant impact on a category-defining product and work with a talented team of engineers.

What You'll Do:
• Architect: Influence the architecture of how our data processing engine harnesses and manages compute, memory, storage, and network resources.
• Design: Lead the design of functional and performance enhancements to the OS-level capabilities underlying our data processing engine, such as concurrent event processing, memory management, and inter-process communication.
• Core Development: Individually design, implement, test, optimize, and maintain components of the data processing engine.
• Innovation and Differentiation: Analyze advances in the system services provided by platforms and programming models for high-concurrency data processing on CPUs and GPUs (e.g., Rust, the CUDA runtime) and identify opportunities for our engine to extend its technology and product leadership.
• Collaboration: Partner effectively with the execution engine engineering team in enhancing system software capabilities.
• Continuous Improvement: Foster best practices in design and code reviews, testing, CI/CD, and issue resolution to maintain the highest product quality, security, efficiency, and productivity.

What You'll Bring:
• Bachelor's degree in Computer Science or a related field with 5+ years of relevant experience, OR a Master's degree in Computer Science or a related field with 3+ years of relevant experience.
• 3+ years of deep technical experience developing and delivering OS-level services, such as task scheduling, memory management, inter-process communication, and asynchronous event processing, for production software or hardware platforms.
• Demonstrated experience troubleshooting and resolving functional and performance anomalies in both pre- and post-production scenarios.
• Strong knowledge of operating system internals and computer architecture.
• Exceptional programming skills in C, C++, and Rust.
• Extensive development experience in Linux environments.
• Strong analytical and problem-solving skills with a passion for performance optimization.

Location: Hyderabad, India or Remote

Principal/Senior Cloud Engineer - Kubernetes

Hyderabad/Pune - On Site

We are a new-age, AI-first digital and cloud engineering services company that drives Agility and Relevance for our clients' success. Powered by cutting-edge technology solutions that enable new business models and revenue streams, we help our clients achieve their trajectory of growth. Agility is a core muscle, an integral part of the fabric of a modern enterprise. To succeed in an ever-changing business environment, every modern organization needs to adapt and renew itself quickly. We help foster a more agile approach to business, reconfiguring strategy, structure, and processes to achieve more growth and drive greater efficiencies. Relevance is timeless and is the only way to survive and thrive. The quest for relevance defines the exponential acceleration of humanity. This has presented us with a slew of opportunities, but also many unprecedented challenges. With technology-led innovation, we help our customers harness these opportunities and address myriad challenges.

Key Responsibilities:
• Understand system architecture and design specifications and translate them into efficient Kubernetes-based deployments.
• Design, implement, and maintain Kubernetes Operators with Custom Resource Definitions (CRDs) to automate lifecycle management of distributed applications and services.
• Manage and optimize Kubernetes clusters, including rolling upgrades, backups, restore processes, and automated failover mechanisms.
• Implement scaling strategies (horizontal and vertical autoscaling) to ensure platform resilience and optimal performance.
• Develop monitoring, alerting, and logging frameworks for Kubernetes workloads using tools like Prometheus, Grafana, or the ELK stack.
• Manage persistent storage volumes, ensuring data consistency and high availability across pods and clusters.
• Collaborate with software engineering and DevOps teams to define and implement best practices for container management, deployment pipelines, and infrastructure automation.
• Contribute to infrastructure-as-code (IaC) and CI/CD automation initiatives for Kubernetes environments.
• Continuously evaluate and integrate new CNCF ecosystem tools to enhance the performance, scalability, and security of our Kubernetes platform.

Requirements:

Must-Have Skills:
• Strong hands-on experience (4+ years) with Kubernetes (K8s), Docker, and Kubernetes Operator development.
• Proficiency in Golang (mandatory), with the ability to design, develop, and maintain cloud-native components.
• Solid understanding of Linux systems, shell scripting, and system-level troubleshooting.
• Proven experience in cloud computing environments (AWS, GCP, Azure, or private cloud).
• Knowledge of distributed systems concepts (consistency, replication, failover, leader election, etc.).
• Experience implementing monitoring and alerting systems for Kubernetes workloads.

Nice to Have:
• Experience with Kafka, gRPC, or other event-driven communication frameworks.
• Familiarity with ZooKeeper, etcd, or Consul for distributed coordination and service discovery.
• Understanding of multi-threaded programming, concurrency models, and OS-level performance tuning.
• Exposure to Agile software development practices and cross-functional collaboration.
• Experience contributing to or building tools in the CNCF ecosystem (Helm, Prometheus, ArgoCD, etc.).
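The Operator work described above revolves around the reconcile loop: repeatedly comparing a resource's desired spec against observed cluster state and emitting corrective actions. As a minimal, self-contained sketch of that idea (a toy model, not real controller-runtime code; the `reconcile` function and its map-based "state" are illustrative assumptions):

```go
package main

import "fmt"

// reconcile compares desired replica counts against actual ones and returns
// the corrective actions needed, mirroring in miniature what a Kubernetes
// Operator's reconcile loop does when it diffs a CRD spec against the cluster.
func reconcile(desired, actual map[string]int) []string {
	actions := []string{}
	for name, want := range desired {
		have := actual[name]
		switch {
		case have < want:
			actions = append(actions, fmt.Sprintf("scale %s up to %d", name, want))
		case have > want:
			actions = append(actions, fmt.Sprintf("scale %s down to %d", name, want))
		}
	}
	// Anything running that is no longer desired gets garbage-collected.
	for name := range actual {
		if _, ok := desired[name]; !ok {
			actions = append(actions, fmt.Sprintf("delete %s", name))
		}
	}
	return actions
}

func main() {
	desired := map[string]int{"api": 3, "worker": 2}
	actual := map[string]int{"api": 1, "legacy": 1}
	for _, a := range reconcile(desired, actual) {
		fmt.Println(a)
	}
}
```

A real Operator would run this level-triggered loop on watch events and requeue on error; the key property, preserved here, is that reconciliation is idempotent and driven only by the observed diff.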

Sr. Compiler Engineer - US

Mountain View, CA

We are at the forefront of revolutionizing data processing for traditional analytics and cutting-edge GenAI preprocessing. We are building an innovative data processing engine that is transforming how Apache Spark, Apache Flink, Ray, and others operate on diverse, large-scale data. Our team of engineers drives and adopts advances in hardware-accelerated computing, parallel processing of large-scale data, query optimization, distributed systems, compilers, machine learning, and cloud-native computing. We are looking for specialists to join our engineering team and shape the future of accelerated data processing.

The Opportunity: As a Senior Compiler Engineer, you will lead the advancement of compilers for our data processing engine. You will enhance the functional breadth, performance, scale, and reliability of the engine's compilers in executing data processing workloads on diverse acceleration hardware, including CPUs, GPUs, and FPGAs. This is a unique opportunity to make a significant impact on a category-defining product and work with a talented team of engineers.

What You'll Do:
• Architect: Lead the architecture of how our data processing engine translates logical and physical data processing plans into efficient code for execution on CPUs, GPUs, and FPGAs.
• Design: Lead the design of functional and performance enhancements to the compilers in our data processing engine.
• Core Development: Individually design, implement, test, optimize, and maintain components of the compilers for the data processing engine.
• Innovation and Differentiation: Analyze advances in the capabilities of CPU and GPU processing elements from Nvidia, AMD, Intel, and others, as well as compiler platforms and tools, and identify opportunities for our engine to extend its technology and product leadership.
• Collaboration: Partner effectively with engineering and product management in defining the language specifications of our product and ensuring compliance with industry standards, platform definitions, and workload requirements.
• Continuous Improvement: Foster best practices in design and code reviews, testing, CI/CD, and issue resolution to maintain the highest product quality, security, efficiency, and productivity.

What You'll Bring:
• Bachelor's degree in Computer Science or a related field with 7+ years of relevant experience, OR a Master's degree in Computer Science or a related field with 5+ years of relevant experience.
• 5+ years of deep technical experience developing and enhancing production-quality compilers, tools, or related software.
• Demonstrated experience with auto-vectorization compiler technologies and data-parallel architectures.
• Demonstrated experience with compiler internals; solid experience with GCC/LLVM/MLIR.
• Demonstrated experience with parsing, IRs, type systems, and static analysis; solid experience with flex/yacc/ANTLR.
• Experience with compilation for data applications (e.g., query compilers, query planners), data processing languages (e.g., SQL, Python), acceleration hardware (e.g., Nvidia GPUs), and data processing engines (e.g., Apache Spark, Presto) preferred.
• Exceptional programming skills in C and C++. Rust experience preferred.
• Extensive development experience in Linux environments.
• Strong analytical and problem-solving skills with a passion for performance optimization.

Location Considerations: We value face-to-face collaboration but recognize that talent can be found anywhere. Our engineering team works at our headquarters in Mountain View, CA, at our India office in Hyderabad, and at remote locations. This specific position will be based at our headquarters in Mountain View, CA.

Data Processing Engineer - US

Mountain View, CA/Remote

We are at the forefront of revolutionizing data processing for traditional analytics and cutting-edge GenAI preprocessing. We are building an innovative data processing engine that is transforming how Apache Spark, Apache Flink, Ray, and others operate on diverse, large-scale data. Our team of engineers drives and adopts advances in hardware-accelerated computing, parallel processing of large-scale data, query optimization, distributed systems, compilers, machine learning, and cloud-native computing. We are looking for specialists to join our engineering team and shape the future of accelerated data processing.

The Opportunity: As a Data Processing Engineer - I/O, you will be a key individual contributor in advancing the data read and write capabilities of our data processing engine. You will enhance the functional breadth, performance, scale, and reliability of the engine in reading and writing large-scale data of various data types from diverse data sources and sinks. This is a unique opportunity to make a significant impact on a category-defining product and work with a talented team of engineers.

What You'll Do:
• Architect: Influence the architecture of how our data processing engine interfaces with data sources and sinks, catalogs, and data formats.
• Design: Lead the design of functional and performance enhancements to adapters/connectors, data representations, data filtering, caching, and more in our data processing engine.
• Core Development: Individually design, implement, test, optimize, and maintain components of the data processing engine.
• Innovation and Differentiation: Analyze the technology roadmaps of existing and emerging data formats and libraries, open table formats, catalog services, and more (e.g., Apache Arrow, Apache Parquet, Apache Iceberg) and identify opportunities for our engine to extend its technology and product leadership.
• Collaboration: Partner effectively with engineering and product management in defining and realizing the data I/O roadmap of our product.
• Continuous Improvement: Foster best practices in design and code reviews, testing, CI/CD, and issue resolution to maintain the highest product quality, security, efficiency, and productivity.

What You'll Bring:
• Bachelor's degree in Computer Science or a related field with 7+ years of relevant experience, OR a Master's degree in Computer Science or a related field with 5+ years of relevant experience.
• 3+ years of deep technical experience developing and optimizing data read and write interfaces for large-scale data processing, particularly related to Apache Parquet, Apache ORC, Apache Iceberg, Apache Spark, and similar technologies.
• Demonstrated experience instrumenting, analyzing, and optimizing the performance of data processing engine components on benchmark and customer workloads.
• Demonstrated experience in the design, development, and successful release of high-performance data processing engine features for large production deployments.
• Good knowledge of the architecture of one or more of Apache Spark, Apache Flink, and Presto/Trino.
• Exceptional programming skills in C and C++. Rust experience preferred.
• Extensive development experience in Linux environments.
• Strong analytical and problem-solving skills with a passion for performance optimization.

Location Considerations: Mountain View, CA or Remote
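The I/O role above centers on columnar formats such as Apache Arrow and Parquet, where filters typically produce a selection vector over column values instead of copying rows. As a hedged, minimal sketch of that pattern (a toy `Batch` type of my own invention, not this engine's actual code):

```go
package main

import "fmt"

// Batch is a toy columnar batch: values are stored per column, and rows are
// addressed by index. Filtering yields a selection vector of surviving row
// indices rather than materialized rows, the way columnar engines defer
// copying data until after predicates have run.
type Batch struct {
	Size   int
	Int64s map[string][]int64 // column name -> column values
}

// SelectWhere returns the indices of rows whose value in column col
// satisfies pred.
func (b *Batch) SelectWhere(col string, pred func(int64) bool) []int {
	sel := make([]int, 0, b.Size)
	for i, v := range b.Int64s[col] {
		if pred(v) {
			sel = append(sel, i)
		}
	}
	return sel
}

func main() {
	b := &Batch{Size: 5, Int64s: map[string][]int64{"price": {10, 55, 7, 90, 42}}}
	sel := b.SelectWhere("price", func(v int64) bool { return v > 40 })
	fmt.Println(sel) // prints [1 3 4]
}
```

The payoff of the selection-vector design is that later operators touch only the surviving indices, and multiple predicates can intersect their vectors before any row is assembled.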

Data Processing Engineer - India

Hyderabad, India/Remote

We are at the forefront of revolutionizing data processing for traditional analytics and cutting-edge GenAI preprocessing. We are building an innovative data processing engine that is transforming how Apache Spark, Apache Flink, Ray, and others operate on diverse, large-scale data. Our team of engineers drives and adopts advances in hardware-accelerated computing, parallel processing of large-scale data, query optimization, distributed systems, compilers, machine learning, and cloud-native computing. We are looking for specialists to join our engineering team and shape the future of accelerated data processing.

The Opportunity: As a Data Processing Engineer - I/O, you will be a key individual contributor in advancing the data read and write capabilities of our data processing engine. You will enhance the functional breadth, performance, scale, and reliability of the engine in reading and writing large-scale data of various data types from diverse data sources and sinks. This is a unique opportunity to make a significant impact on a category-defining product and work with a talented team of engineers.

What You'll Do:
• Architect: Influence the architecture of how our data processing engine interfaces with data sources and sinks, catalogs, and data formats.
• Design: Lead the design of functional and performance enhancements to adapters/connectors, data representations, data filtering, caching, and more in our data processing engine.
• Core Development: Individually design, implement, test, optimize, and maintain components of the data processing engine.
• Innovation and Differentiation: Analyze the technology roadmaps of existing and emerging data formats and libraries, open table formats, catalog services, and more (e.g., Apache Arrow, Apache Parquet, Apache Iceberg) and identify opportunities for our engine to extend its technology and product leadership.
• Collaboration: Partner effectively with engineering and product management in defining and realizing the data I/O roadmap of our product.
• Continuous Improvement: Foster best practices in design and code reviews, testing, CI/CD, and issue resolution to maintain the highest product quality, security, efficiency, and productivity.

What You'll Bring:
• Bachelor's degree in Computer Science or a related field with 7+ years of relevant experience, OR a Master's degree in Computer Science or a related field with 5+ years of relevant experience.
• 3+ years of deep technical experience developing and optimizing data read and write interfaces for large-scale data processing, particularly related to Apache Parquet, Apache ORC, Apache Iceberg, Apache Spark, and similar technologies.
• Demonstrated experience instrumenting, analyzing, and optimizing the performance of data processing engine components on benchmark and customer workloads.
• Demonstrated experience in the design, development, and successful release of high-performance data processing engine features for large production deployments.
• Good knowledge of the architecture of one or more of Apache Spark, Apache Flink, and Presto/Trino.
• Exceptional programming skills in C and C++. Rust experience preferred.
• Extensive development experience in Linux environments.
• Strong analytical and problem-solving skills with a passion for performance optimization.

Location Considerations: Hyderabad, India or Remote

Cloud Platform Software Engineer - India

Hyderabad, India/Remote

We are at the forefront of revolutionizing data processing for traditional analytics and cutting-edge GenAI preprocessing. We are building an innovative data processing engine that is transforming how Apache Spark, Apache Flink, Ray, and others operate on diverse, large-scale data. Our team of engineers drives and adopts advances in hardware-accelerated computing, parallel processing of large-scale data, query optimization, distributed systems, compilers, machine learning, and cloud-native computing. We are looking for specialists to join our engineering team and shape the future of accelerated data processing.

The Opportunity: As a Cloud Platform Software Engineer, you will be a key individual contributor in advancing the SaaS platform for automating the multi-cloud deployment and operation of the data processing engine. You will enhance the functional breadth, performance, scale, and reliability of the SaaS platform on multiple clouds. This is a unique opportunity to make a significant impact on a category-defining product and work with a talented team of engineers.

What You'll Do:
• Architect: Influence the architecture of how our data processing engine is deployed and operated as an enterprise-class service in the customer's preferred cloud environment.
• Design: Lead the design of functional and performance enhancements to the automation capabilities of the SaaS platform on different clouds.
• Core Development: Individually design, implement, test, optimize, and maintain components of the SaaS platform.
• Collaboration: Partner effectively with the data processing engineering team and product management in implementing SaaS platform capabilities.
• Continuous Improvement: Foster best practices in design and code reviews, testing, CI/CD, and issue resolution to maintain the highest product quality, security, efficiency, and productivity.
What You'll Bring:
• Bachelor's degree in Computer Science or a related field with 5+ years of relevant experience, OR a Master's degree in Computer Science or a related field with 3+ years of relevant experience.
• 3+ years of technical experience developing microservice-based, multi-tenant, cloud-native applications for AWS, GCP, or Azure.
• 3+ years of technical experience developing and running infrastructure-as-code for production applications.
• Demonstrated experience with AWS, GCP, or Azure core services, APIs, and permissions models.
• Strong fundamentals in building scalable, highly available, secure implementations of distributed, multi-tier, always-on applications.
• Strong programming skills in Golang.
• Strong development experience in Linux environments.
• Strong analytical and problem-solving skills with a passion for performance optimization.

Location Considerations: Hyderabad, India or Remote

Senior Frontend Developer - India

Pune, India - Hybrid

We empower Go-To-Market teams to ascend to new heights in their sales performance, unlocking boundless opportunities for growth. We're passionate about helping sales teams excel beyond expectations. Our pride lies in assembling an unparalleled team and crafting a crucial solution that becomes an indispensable tool for our users. With AI, sales excellence becomes an attainable reality.

We are looking for a Senior React developer to join our dynamic team! As a Senior React developer, you will be responsible for building modules using React and associated tooling (Redux, React Query, IndexedDB, Material UI). Team management and mentoring experience is required.

Responsibilities:
• Architect and implement end-to-end features, collaborating closely with product, design, and backend teams to deliver high-impact user experiences.
• Build and scale responsive SaaS applications with a modular, maintainable front-end architecture.
• Develop and maintain core front-end infrastructure, including design systems, shared libraries, and reusable components.
• Implement UI and UX enhancements using modern React (18+) patterns, ensuring a consistent and accessible user experience.
• Optimize applications for scalability, performance, and browser storage efficiency.
• Maintain high standards of code quality through rigorous code reviews, unit/integration testing, and adherence to front-end best practices.
• Take ownership of product features and contribute to release planning, mentoring team members, and ensuring application stability (monitoring, logging, debugging).

Must-Have Skills:
• 6+ years of production experience with React (React 18+ preferred) and modern frontend tooling.
• 4+ years of experience with TypeScript, including advanced features like generics, types, and async programming.
• Strong experience with the TanStack ecosystem (Query, Table, Virtual) and browser storage (localStorage, sessionStorage, IndexedDB).
• Hands-on experience building responsive and accessible SaaS applications.
• Experience with component and integration testing using Vitest or Jest, along with React Testing Library.
• Understanding of performance testing and optimization techniques in frontend applications.
• Familiarity with React-specific developer tools for profiling, debugging, and performance monitoring.
• Solid understanding of state management approaches (Redux, Context API, TanStack Query).
• Familiarity with RESTful APIs, asynchronous data handling, and third-party integrations.
• Excellent communication skills and a strong sense of ownership and accountability.
• Ability to work independently and adapt in a fast-paced environment.

Good-to-Have Skills:
• Experience with Tailwind CSS and Radix UI.
• Familiarity with component libraries and documentation tools like Storybook.
• Experience with monorepo and workspace tools such as Turborepo or Nx.
• Familiarity with modern E2E testing tools (e.g., Playwright).

Qualifications: Bachelor's degree in Computer Science or equivalent experience.

Lead Full Stack Engineer - India

Hyderabad, India - Onsite

We provide a full-stack IoT traceability solution using custom smart labels and ultra-low-power devices. We use cutting-edge technologies to enable end-to-end supply chain digitization. We are at the forefront of revolutionizing supply chain, warehouse, and inventory management solutions by providing real-time visibility into assets and shipments. Our dedicated team collaborates closely with the Product team to architect and uphold the cutting-edge technologies that power our core platform, customer-facing APIs, and real-time event processing tailored specifically to the challenges of the supply chain industry. We tackle compelling technical hurdles, working with data from our fleet of IoT devices to provide real-time visibility.

About the role: We are seeking a Lead Full Stack Software Engineer to join our team developing and maintaining a sophisticated logistics and tracking platform. You'll work on a complex, multi-tenant Rails application that handles real-time shipment tracking, IoT device management, alert processing, and comprehensive reporting systems.
Responsibilities:
• Collaborate with the Product team to design, develop, and maintain robust solutions in Ruby on Rails
• Implement scalable, high-performance systems for real-time event processing in the supply chain, warehouse, and inventory management domains
• Contribute to the ongoing improvement and optimization of our core platform and customer-facing APIs
• Work on diverse technical challenges, leveraging your expertise in Ruby on Rails to enhance real-time visibility and data processing from our IoT fleet
• Actively participate in grassroots innovation and contribute to decentralized decision-making within the engineering team
• Foster a data-centric mindset, ensuring that exceptional ideas are welcomed and considered, regardless of the source

Requirements:
• 7+ years of experience developing web applications using Ruby on Rails, HTML, CSS, JavaScript, and/or similar technologies (MEAN/MERN/MEVN, Python for the backend with JS at the frontend, or any other full-stack tech stack will also be considered). Note: You should be ready to learn and move to the RoR + JavaScript stack.
• Strong knowledge of software development fundamentals, including a relevant background in computer science fundamentals, distributed systems, data storage, and agile development methodologies.
• You are pragmatic and combine a strong understanding of technology and product needs to arrive at the best solution for a given problem.
• You are highly entrepreneurial and thrive in taking ownership of your own impact.
• Guiding and mentoring junior team members.
Key Technologies & Stack:
• Backend: Ruby on Rails (mountable engines), PostgreSQL, TimescaleDB
• Background Processing: Resque, ActiveJob
• Frontend: JavaScript (ES6+), TailwindCSS, Turbo Streams
• Infrastructure: Redis, AWS services, Docker
• APIs: RESTful APIs, webhook integrations, third-party service integrations
• Testing: RSpec, Playwright for E2E testing
• Knowledge of geolocation and mapping services
• Familiarity with time-series databases (TimescaleDB preferred)
• Experience with multi-tenant application design

Nice to Haves:
• You have worked with Apache Kafka or a similar service
• You have worked with Redis, Docker, and Hotwire (Turbo + StimulusJS)
• Experience designing and developing products in supply chain domains

Senior Full Stack Engineer - India

Hyderabad, India - Onsite

We provide a full-stack IoT traceability solution using custom smart labels and ultra-low-power devices. We use cutting-edge technologies to enable end-to-end supply chain digitization. We are at the forefront of revolutionizing supply chain, warehouse, and inventory management solutions by providing real-time visibility into assets and shipments. Our dedicated team collaborates closely with the Product team to architect and uphold the cutting-edge technologies that power our core platform, customer-facing APIs, and real-time event processing tailored specifically to the challenges of the supply chain industry. We tackle compelling technical hurdles, working with data from our fleet of IoT devices to provide real-time visibility.

About the role: We are seeking a Sr. Full Stack Software Engineer to join our team developing and maintaining a sophisticated logistics and tracking platform. You'll work on a complex, multi-tenant Rails application that handles real-time shipment tracking, IoT device management, alert processing, and comprehensive reporting systems.
Responsibilities:
• Collaborate with the Product team to design, develop, and maintain robust solutions in Ruby on Rails
• Implement scalable, high-performance systems for real-time event processing in the supply chain, warehouse, and inventory management domains
• Contribute to the ongoing improvement and optimization of our core platform and customer-facing APIs
• Work on diverse technical challenges, leveraging your expertise in Ruby on Rails to enhance real-time visibility and data processing from our IoT fleet
• Actively participate in grassroots innovation and contribute to decentralized decision-making within the engineering team
• Foster a data-centric mindset, ensuring that exceptional ideas are welcomed and considered, regardless of the source

Requirements:
• 5+ years of experience developing web applications using Ruby on Rails, HTML, CSS, JavaScript, and/or similar technologies
• Strong knowledge of software development fundamentals, including a relevant background in computer science fundamentals, distributed systems, data storage, and agile development methodologies.
• You are pragmatic and combine a strong understanding of technology and product needs to arrive at the best solution for a given problem.
• You are highly entrepreneurial and thrive in taking ownership of your own impact.
Key Technologies & Stack:
• Backend: Ruby on Rails (mountable engines), PostgreSQL, TimescaleDB
• Background Processing: Resque, ActiveJob
• Frontend: JavaScript (ES6+), TailwindCSS, Turbo Streams
• Infrastructure: Redis, AWS services, Docker
• APIs: RESTful APIs, webhook integrations, third-party service integrations
• Testing: RSpec, Playwright for E2E testing
• Knowledge of geolocation and mapping services
• Familiarity with time-series databases (TimescaleDB preferred)
• Experience with multi-tenant application design

Nice to Haves:
• You have worked with Apache Kafka or a similar service
• You have worked with Redis, Docker, and Hotwire (Turbo + StimulusJS)
• Experience designing and developing products in supply chain domains

Accelerated Computing Software Engineer - US

Mountain View, CA/Remote

We are at the forefront of revolutionizing data processing for traditional analytics and cutting-edge GenAI preprocessing. We are building an innovative data processing engine that is transforming how Apache Spark, Apache Flink, Ray, and others operate on diverse, large-scale data. Our team of engineers drives and adopts advances in hardware-accelerated computing, parallel processing of large-scale data, query optimization, distributed systems, compilers, machine learning, and cloud-native computing. We are looking for specialists to join our engineering team and shape the future of accelerated data processing.

The Opportunity: As a Parallel Software Engineer, you will be a key individual contributor in developing advanced parallel software that unlocks the full potential of diverse hardware accelerators, including GPUs and SIMD-capable CPUs. You will enhance the functional breadth, performance, scale, and reliability of the data processing operators that are integral to our data processing engine. This is a unique opportunity to make a significant impact on a category-defining product and work with a talented team of engineers.

What You'll Do:
• Architect: Influence the architecture of how our data processing engine efficiently harnesses the parallelism in GPUs and SIMD-capable CPUs in processing diverse, large-scale data.
• Design: Lead the design of functional and performance enhancements to the operators and functions that are accelerated by our engine.
• Core Development: Individually design, implement, test, optimize, and maintain parallel implementations of operators and functions on diverse acceleration hardware.
• Innovation and Differentiation: Analyze advances in accelerated computing hardware, programming models, and related tools, and ensure our engine extends its technology and product leadership.
• Collaboration: Partner effectively with the execution engine engineering team in integrating parallel software components with the overall engine.
• Continuous Improvement: Foster best practices in design and code reviews, testing, CI/CD, and issue resolution to maintain the highest product quality, security, efficiency, and productivity.

What You'll Bring:
• Bachelor's degree in Computer Science or a related field with 5+ years of relevant experience, OR a Master's degree in Computer Science or a related field with 3+ years of relevant experience.
• 3+ years of deep technical experience developing production applications that process large-scale data using SIMD extensions of CPUs (e.g., AVX), GPU programming models (e.g., CUDA, ROCm), or an equivalent accelerated computing framework.
• Demonstrated experience working with software libraries, development tools, and profiling tools specific to parallel and accelerated computing.
• Demonstrated experience troubleshooting and resolving functional and performance anomalies in both pre- and post-production scenarios.
• Strong knowledge of computer architecture.
• Exceptional programming skills in C and C++.
• Extensive development experience in Linux environments.
• Strong analytical and problem-solving skills with a passion for performance optimization.

Location Considerations: Mountain View, CA or Remote
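The role above targets SIMD lanes and GPU thread blocks in C/C++ and CUDA; the underlying decomposition, though, is the same chunk-reduce-combine shape at any granularity. As a language-neutral sketch of that shape (illustrative only, using goroutines in place of hardware lanes; `parallelSum` is a name chosen for this example):

```go
package main

import (
	"fmt"
	"sync"
)

// parallelSum splits data into nWorkers contiguous chunks, reduces each chunk
// concurrently, then combines the partial sums: the same map/reduce
// decomposition that SIMD lanes or GPU thread blocks apply at much finer
// granularity.
func parallelSum(data []int64, nWorkers int) int64 {
	if nWorkers < 1 {
		nWorkers = 1
	}
	partial := make([]int64, nWorkers)
	chunk := (len(data) + nWorkers - 1) / nWorkers // ceiling division
	var wg sync.WaitGroup
	for w := 0; w < nWorkers; w++ {
		lo := w * chunk
		if lo >= len(data) {
			break
		}
		hi := lo + chunk
		if hi > len(data) {
			hi = len(data)
		}
		wg.Add(1)
		go func(w, lo, hi int) {
			defer wg.Done()
			var s int64
			for _, v := range data[lo:hi] {
				s += v
			}
			partial[w] = s // each worker owns its slot, so no locking is needed
		}(w, lo, hi)
	}
	wg.Wait()
	var total int64
	for _, s := range partial {
		total += s
	}
	return total
}

func main() {
	data := make([]int64, 1000)
	for i := range data {
		data[i] = int64(i + 1)
	}
	fmt.Println(parallelSum(data, 4)) // 1+2+...+1000 = 500500
}
```

Giving each worker a private slot in `partial` avoids contention on a shared accumulator; the sequential combine at the end is cheap because it touches only nWorkers values.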

Accelerated Computing Software Engineer - India

Hyderabad, India/Remote

We are at the forefront of revolutionizing data processing for traditional analytics and cutting-edge GenAI preprocessing. We are building an innovative data processing engine that is transforming how Apache Spark, Apache Flink, Ray, and others operate on diverse, large-scale data. Our team of engineers drives and adopts advances in hardware-accelerated computing, parallel processing of large-scale data, query optimization, distributed systems, compilers, machine learning, and cloud-native computing. We are looking for specialists to join our engineering team and shape the future of accelerated data processing.

The Opportunity: As a Parallel Software Engineer, you will be a key individual contributor developing advanced parallel software that unlocks the full potential of diverse hardware accelerators, including GPUs and SIMD-capable CPUs. You will enhance the functional breadth, performance, scale, and reliability of the data processing operators that are integral to our engine. This is a unique opportunity to make a significant impact on a category-defining product and work with a talented team of engineers.

What You'll Do:
• Architect: Influence the architecture of how our data processing engine efficiently harnesses the parallelism of GPUs and SIMD-capable CPUs to process diverse, large-scale data.
• Design: Lead the design of functional and performance enhancements to the operators and functions accelerated by our engine.
• Core Development: Individually design, implement, test, optimize, and maintain parallel implementations of operators and functions on diverse acceleration hardware.
• Innovation and Differentiation: Analyze advances in accelerated computing hardware, programming models, and related tools, and ensure our engine extends its technology and product leadership.
• Collaboration: Partner effectively with the execution engine engineering team to integrate parallel software components with the overall engine.
• Continuous Improvement: Foster best practices in design and code reviews, testing, CI/CD, and issue resolution to maintain the highest product quality, security, efficiency, and productivity.

What You'll Bring:
• Bachelor's degree in Computer Science or a related field with 5+ years of relevant experience, OR a Master's degree in Computer Science or a related field with 3+ years of relevant experience.
• 3+ years of deep technical experience developing production applications that process large-scale data using SIMD extensions of CPUs (e.g., AVX), GPU programming models (e.g., CUDA, ROCm), or an equivalent accelerated computing framework.
• Demonstrated experience with software libraries, development tools, and profiling tools specific to parallel and accelerated computing.
• Demonstrated experience troubleshooting and resolving functional and performance anomalies in both pre- and post-production scenarios.
• Strong knowledge of computer architecture.
• Exceptional programming skills in C and C++.
• Extensive development experience in Linux environments.
• Strong analytical and problem-solving skills with a passion for performance optimization.

Location Considerations: Hyderabad, India or Remote

Senior Backend Engineer (Golang) - India

Hyderabad, India - Onsite

We are a new-age, AI-first digital and cloud engineering services company that drives agility and relevance for our clients' success. Powered by cutting-edge technology solutions that enable new business models and revenue streams, we help our clients achieve their growth trajectory. Agility is a core muscle and an integral part of the fabric of a modern enterprise. To succeed in an ever-changing business environment, every modern organization needs to adapt and renew itself quickly. We help foster a more agile approach to business, reconfiguring strategy, structure, and processes to achieve more growth and drive greater efficiencies. Relevance is timeless and is the only way to survive and thrive. The quest for relevance defines the exponential acceleration of humanity, which has presented us with a slew of opportunities but also many unprecedented challenges. With technology-led innovation, we help our customers harness these opportunities and address these challenges.

Key Responsibilities:
• Design, build, and maintain backend services written in Go (Golang), including RESTful APIs, microservices, and event-driven architectures.
• Work with message queues and streaming platforms such as Kafka, RabbitMQ, etc.
• Manage and optimize relational databases (e.g., PostgreSQL, MySQL) and/or NoSQL or graph databases; design schemas, optimize queries, and ensure data integrity and performance.
• Apply security best practices (authentication, authorization, encryption, data handling, etc.) in backend services.
• Ensure code quality through solid architectural design, modularization, thorough testing (unit and integration), and code reviews.
• Contribute to DevOps and infrastructure tasks, including containerization (Docker), orchestration (Kubernetes), CI/CD pipelines, and deployments.
• Monitor services, maintain logging and observability, and debug and troubleshoot production issues effectively.
• Collaborate with cross-functional teams to deliver scalable, reliable backend solutions.
Required Skills & Qualifications:
• 4-10 years of hands-on experience in backend development using Go (Golang).
• Practical experience with Kafka, RabbitMQ, or similar message brokers.
• Strong knowledge of relational databases, with additional experience in NoSQL or graph databases.
• Working knowledge of any graph database (e.g., Neo4j, TigerGraph, ArangoDB) is a plus.
• Familiarity with containerization, Kubernetes, and CI/CD pipelines.
• Strong understanding of security practices in backend development.
• Bachelor's degree in Computer Engineering or a related field.

Good to Have: Certifications in backend technologies or cloud platforms.

Submit Your Application

We appreciate your interest in joining the team! Please fill out the sections below and attach your resume.
