Remote Staff Data Software Engineer at Lookout Inc
About the Role
We are looking for a Remote Staff Data Software Engineer to join our innovative team at Lookout Inc. In this role, you will play a crucial part in developing our data engines, ETL pipelines, and analysis services. As a part of our Data Engineering team, you will help safeguard data across devices, apps, networks, and clouds through our unified, cloud-native security platform.
What You'll Do
- Design and implement scalable data pipelines and ETL processes to support data ingestion and transformation.
- Collaborate with cross-functional teams to understand data requirements and deliver robust solutions.
- Analyze and aggregate data to provide insights that drive business decisions.
- Contribute to the development of core intellectual property related to data services.
- Ensure the reliability and performance of data systems through monitoring and optimization.
Requirements
- 5+ years of experience as a Data Software Engineer or in a similar role.
- Strong proficiency in programming languages such as Python, Java, or Scala.
- Experience with data processing frameworks like Apache Spark or Hadoop.
- Familiarity with cloud platforms such as AWS, Azure, or Google Cloud.
- Solid understanding of database technologies, both SQL and NoSQL.
Nice to Have
- Experience with machine learning and data science concepts.
- Knowledge of data governance and security best practices.
- Familiarity with containerization technologies like Docker and Kubernetes.
What We Offer
- Competitive salary ranging from $120,000 to $150,000 per year.
- Flexible remote work environment.
- Comprehensive health benefits and wellness programs.
- Opportunities for professional development and growth.
- Collaborative and inclusive company culture.
This Remote Staff Data Software Engineer position at Lookout Inc offers a unique opportunity to work on cutting-edge data solutions in a flexible remote environment with a competitive salary.
Who Will Succeed Here
- Proficient in designing scalable ETL pipelines using Apache Spark and Hadoop, with a strong grasp of Python and Java for data manipulation and transformation.
- Experienced in cloud computing environments, particularly AWS and Google Cloud, with the ability to architect and deploy data solutions in a fully remote setting.
- Strategic, with a focus on performance optimization in SQL and NoSQL databases, ensuring efficient data retrieval and processing for real-time analytics.