Location: Seattle, WA, US
Job Summary:
1. Job Duties
- Design and implement an offline/real-time data architecture for recommendation systems.
- Build a flexible, scalable, and high-performance storage system.
- Troubleshoot production systems and enhance stability mechanisms.
- Develop distributed systems for offline/online storage and data processing.
2. Required Skills
- Proficiency in big data processing systems (e.g., Spark/Flink).
- Strong understanding of data lake technologies (e.g., Hudi, Iceberg, Delta Lake).
- Knowledge of HDFS principles and familiarity with storage formats (e.g., Parquet, ORC).
- Proficiency in programming languages such as Java, C++, or Scala.
3. Required Experience
- Bachelor’s degree in Computer Science or a related field.
- 1+ years of experience building scalable systems.
- Experience in data warehouse modeling and in managing large-scale (petabyte-range) data.
- Familiarity with other big data systems (e.g., Hive, HBase, Kudu) is a plus.
Job URLs: