Data Engineer with Big Data Experience

  • Posted on

     06-11-2020
  • Job type

     Contract
  • Job category

     Data Engineer with Big Data Experience
  • Location

     Richmond, VA
  • Minimum Experience

     5+ Years
  • Number Of Openings

    1
  • Pay Rate

    DOE
  • Description

    We are looking for a Data Engineer with big data experience to develop sustainable, data-driven solutions using current and next-generation data technologies that meet the needs of our organization and business customers.

  • Desired Profile

     
    • Required 8 years of experience with the Software Development Life Cycle (SDLC).
    • Required 5 years of experience working on a big data platform.
    • Required 3 years of experience working with unstructured datasets.
    • Required 3 years of experience developing microservices in Python, Java, or Scala (see the REST sketch after this list).
    • Required 1 year of experience building data pipelines, CI/CD pipelines, and fit-for-purpose data stores.
    • Required 1 year of experience in cloud technologies: AWS, Docker, Ansible, or Terraform.
    • Required 1 year of Agile experience.
    • Required 1 year of experience with a streaming data platform such as Apache Kafka and Spark (see the streaming sketch after this list).
    • 5+ years of data modeling and data engineering experience.
    • 3+ years of microservices architecture & RESTful web service frameworks.
    • 3+ years of experience with JSON, Parquet, or Avro formats.
    • 2+ years of experience creating data quality dashboards and establishing data standards.
    • 2+ years of experience with RDS, NoSQL, or graph databases.
    • 2+ years of experience working with AWS platforms, services, and component technologies, including S3, RDS, and Amazon EMR.
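
    As a sketch of the kind of RESTful microservice this role calls for, here is a minimal Python example using Flask. The endpoint, port, and in-memory data are hypothetical stand-ins; the posting does not prescribe a specific framework.

    ```python
    # Minimal sketch of a RESTful microservice in Flask.
    # The route, port, and dataset are hypothetical illustrations.
    from flask import Flask, jsonify

    app = Flask(__name__)

    # In-memory stand-in for a fit-for-purpose data store.
    CUSTOMERS = {1: {"id": 1, "name": "Acme Corp"}}

    @app.route("/customers/<int:customer_id>", methods=["GET"])
    def get_customer(customer_id):
        customer = CUSTOMERS.get(customer_id)
        if customer is None:
            return jsonify({"error": "not found"}), 404
        return jsonify(customer)

    if __name__ == "__main__":
        app.run(port=8080)
    ```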
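
    Likewise, a minimal sketch of the streaming requirement: a PySpark Structured Streaming job that reads a Kafka topic and lands it as Parquet on S3. The broker address, topic name, and bucket paths are hypothetical placeholders, and running it assumes the Spark Kafka connector (spark-sql-kafka) is on the classpath.

    ```python
    # Sketch: Kafka -> Spark Structured Streaming -> Parquet on S3.
    from pyspark.sql import SparkSession
    from pyspark.sql.functions import col

    spark = (SparkSession.builder
             .appName("kafka-to-parquet")
             .getOrCreate())

    # Read a Kafka topic as a streaming DataFrame (hypothetical broker/topic).
    events = (spark.readStream
              .format("kafka")
              .option("kafka.bootstrap.servers", "broker:9092")
              .option("subscribe", "events")
              .load())

    # Kafka keys/values arrive as bytes; cast to strings for downstream parsing.
    parsed = events.select(col("key").cast("string"),
                           col("value").cast("string"))

    # Land the stream as Parquet with checkpointing (hypothetical bucket paths).
    query = (parsed.writeStream
             .format("parquet")
             .option("path", "s3a://example-bucket/events/")
             .option("checkpointLocation", "s3a://example-bucket/checkpoints/")
             .start())

    query.awaitTermination()
    ```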
  • Responsibilities

     
    • Apply domain-driven design practices to build out data applications, including conceptual and logical models.
    • Build out data consumption views and provision self-service reporting through dimensional modeling.
    • Measure data quality and drive improvements to data standards, helping application teams publish data in the correct format for easy downstream consumption (a minimal metrics sketch follows this list).
    • Build big data applications using open-source frameworks such as Apache Spark, Scala, and Kafka on AWS, along with cloud-based data warehousing services such as Snowflake.
    • Build pipelines that provision features for machine learning models.
    • Be familiar with data science model-building concepts as well as consuming data from a data lake.
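
    As a sketch of the data quality measurement mentioned above, the following PySpark snippet computes per-column completeness (the fraction of non-null values) for a dataset. The S3 path is a hypothetical placeholder, and a real dashboard would persist these metrics to a store rather than print them.

    ```python
    # Sketch: per-column completeness metrics as a data quality input.
    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("dq-metrics").getOrCreate()

    # Hypothetical curated dataset in the data lake.
    df = spark.read.parquet("s3a://example-bucket/curated/customers/")

    total = df.count()

    # F.count() counts non-null values, so each result column is the
    # fraction of rows where that field is populated.
    completeness = df.select([
        (F.count(F.col(c)) / F.lit(total)).alias(c) for c in df.columns
    ])
    completeness.show()
    ```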