Location: Boise, ID
Job Summary: Manage Apache Spark services and infrastructure to process large datasets in batch and streaming modes, and support ad-hoc analysis in Jupyter Notebook.
Job Duties:
- Manage Apache Spark services and associated infrastructure
- Process large datasets efficiently
- Utilize Spark Structured Streaming for continuous data processing (see the sketch after this list)
- Enable ad-hoc analysis using Jupyter Notebook
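
Illustrative example: a minimal PySpark Structured Streaming sketch of the kind of continuous processing described in the duties above. The Kafka broker address, topic name, window size, and console sink are hypothetical placeholders and not part of this posting.

```python
# Minimal Spark Structured Streaming sketch (assumes the spark-sql-kafka package
# is available on the cluster; broker, topic, and sink are placeholders).
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, window, count

spark = (
    SparkSession.builder
    .appName("example-streaming-job")  # hypothetical application name
    .getOrCreate()
)

# Read a continuous stream of events from a Kafka topic (assumed source).
events = (
    spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")  # placeholder broker
    .option("subscribe", "events")                      # placeholder topic
    .load()
)

# Count events over 5-minute windows using the Kafka-provided timestamp column.
counts = (
    events
    .groupBy(window(col("timestamp"), "5 minutes"))
    .agg(count("*").alias("event_count"))
)

# Continuously write the rolling aggregates; a console sink is used here for
# illustration, while a production job would write to a durable sink.
query = (
    counts.writeStream
    .outputMode("update")
    .format("console")
    .start()
)

query.awaitTermination()
```

The same aggregation logic can be explored interactively against a static sample of the data in a Jupyter Notebook before it is deployed as a streaming job.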
Required Skills (Keywords):
- Apache Spark
- Data processing
- Spark Structured Streaming
- Jupyter Notebook
- Data engineering
- Streaming data
Required Experiences (Topics):
- Experience with large dataset management
- Proficiency in data engineering techniques
- Familiarity with real-time data processing
- Background in data analysis and visualization
Job URLs: