Chattanooga, Tennessee
About the Position
The Sr Data Engineer will develop and support the next-generation architecture for the advanced supply chain analytics offering, covering both Data Engineering and Business Intelligence (BI). The role involves designing, creating, managing, and using large datasets across a variety of data platforms. The position crafts, implements, and operates stable, scalable, low-cost solutions to replicate data from production systems into the BI data store.
Functions
Partner with key distribution, transportation, and material handling business managers to design and implement efficient and scalable Extract, Transform, Load (ETL) solutions
Assist in developing and improving the current ETL/BI architecture, emphasizing data security, data quality and timeliness, scalability, and extensibility
Design low-latency data architectures at scale to enable data-driven decision-making
Deliver performant and scalable data products and platforms, both Online Transactional Processing (OLTP) and Online Analytical Processing (OLAP), to consume, process, analyze, and present data from the data ecosystem, while driving data engineering best practices
Ensure the integrity and uninterrupted processing of batch and real-time data pipelines that ingest and process data from various internal data sources and third-party platforms
Work closely with the Data Science and Business Intelligence teams to develop and maintain company-level Key Performance Indicators (KPIs) and accompanying core metrics, and develop architecture to drive business growth
Qualifications
Bachelor’s degree in Computer Science; Master’s or PhD preferred
8-10 years of Data Engineering experience, with a proven track record of working with large datasets and collaborating closely with data analysts, software engineers, and infrastructure engineers
Expert-level skills in writing and optimizing complex SQL and in schema buildout with structured and unstructured data, including star, constellation, and snowflake schemas
Proficiency with languages such as Python, Ruby, or Java
Strong background working with Big Data technologies such as Spark, Kafka, and Hadoop
Experienced in developing and delivering on technical roadmaps and architectures for platforms or cross-functional problems that impact multiple teams
Experience creating data warehouses, data lakes, and data marts
Experience in data mining, profiling, and analysis
Experience with workflow orchestration tools and rules-driven process automation engines
Good understanding of Continuous Integration (CI)/ Continuous Delivery (CD) principles
Demonstrated track record of dealing with ambiguity, prioritizing needs, and delivering results in a dynamic business environment
Proven ability to develop unconventional solutions; sees opportunities to innovate and leads the way
Experienced in designing data solutions utilizing AWS cloud data platforms and tools
Experience in creating scalable architecture to enable development of data-driven products preferred
Experience in creating/supporting decision-support dashboards/products preferred
Experience mentoring BI or ETL teams preferred
Competencies
Business Acumen - Knowledgeable in current and possible future policies, practices, trends, technology, and information affecting his/her business and organization.
Communicate for Impact - Proactively communicates with all stakeholders throughout the life cycle of programs and projects.
Influencing Others - Can quickly find common ground and solve problems for the good of the organization with a minimal amount of noise. Authentically gains the trust and support of peers.
Managing Transitions/Change Management - Effectively plans, manages, and communicates changes in processes with appropriate stakeholders.
Strategic Agility - Enables Kenco to remain competitive by adjusting and adapting to innovative ideas necessary to support Kenco’s long-term organizational strategy.
Travel Requirements
This position requires approximately 25% travel or less.
A passport is not required, but recommended.