Verizon is a leading provider of technology, communications, information and entertainment products, transforming the way we connect across the globe. We’re a diverse network of people driven by our ambition and united in our shared purpose to shape a better future. Here, we have the ability to learn and grow at the speed of technology, and the space to create within every role. Together, we are moving the world forward – and you can too. Dream it. Build it. Do it here.
What you’ll be doing...
As part of the Artificial Intelligence and Data Organization (AI&D), you will drive activities including data engineering and the development of data frameworks and platforms to improve the reusability, operational efficiency, customer experience, and profitability of the company. You will analyze marketing, customer experience, and digital operations environments to build reusable data products and transform data into actionable intelligence. You will turn raw data into usable data pipelines and build data tools and products for effort automation and easy data accessibility.
At Verizon, we are on a multi-year journey to industrialize our data science and AI capabilities. Very simply, this means that AI will fuel all decisions and business processes across the company. With our leadership in bringing the 5G network nationwide, the opportunity for AI will only grow exponentially, going from enabling billions of predictions to possibly trillions of predictions that are automated and real-time.
Architect, design, and implement enterprise data products and batch and real-time streaming data platforms and ecosystems.
Design and implement scalable data curation solutions to migrate the semantic layer from on-premises systems to a cloud data platform.
Build data expertise across all domains within Verizon's internal data platforms and train other data architects and developers.
Gather requirements, assess gaps, and build roadmaps and architectures to help the analytics-driven organization achieve its goals.
Work closely with Data Analysts to ensure data quality and availability for analytical modeling.
Identify gaps and implement solutions for data security, quality, and automation of processes.
Collaborate with cross-functional teams to source new data, develop schema requirements, and maintain metadata.
Build business ready data with reusable fact and dimension tables for reports and analytics.
Identify ways to improve data reliability, efficiency, and quality.
Use data to discover tasks that can be automated.
Drive the data curation strategy across all data platforms.
What we’re looking for...
You will need to have:
Bachelor’s degree or four or more years of work experience.
Six or more years of relevant work experience.
Four or more years of work experience with public cloud data lakes, data warehouses, and analytics services.
Four or more years of database experience with Teradata SQL and NoSQL.
Experience in designing, building, and deploying reusable data products using tools from the Hadoop stack (HDFS, Hive, Spark, HBase, Kafka, Oozie, etc.) and programming in Scala/Python.
Experience with cloud technologies such as GCP, Docker, and Kubernetes, and with data engineering migration programs from on-premises to cloud big data platforms.
Dashboard development experience in Tableau, Qlik and/or Looker.
Even better if you have one or more of the following:
Master’s degree in Computer Science, Information Systems, or a related technical discipline.
Ten or more years of relevant work experience in the big data space with technologies such as Hadoop, Spark, Hive, Kafka, Oozie, ELK, Ranger, Atlas, Presto, etc.
Big Data Analytics Certification in Public Cloud.
Knowledge of telecom architecture.
Experience leveraging and managing CI/CD toolchain products like Jira, STASH, Git, Bitbucket, Artifactory, and Jenkins in Data Engineering projects.
Experience working with a distributed team.
Ability to communicate effectively, with strong presentation, interpersonal, verbal, and written skills.