Senior Database Administrator (Hadoop/Linux)

Phoenix, AZ | IT
Job ID: 73008
Listed on 7/11/2017

KellyMitchell is an award-winning technical staffing firm serving Fortune 500 and high-tech organizations on a global scale. With 16 nationwide offices, two national recruitment centers, one world headquarters, and employees in every state, KellyMitchell connects people with opportunities and clients with success.

We are currently seeking a Senior Database Administrator to support one of our clients. This position is based in Phoenix; however, it can be performed remotely from any location.

Job Description:

Assists IT management in prioritizing database administration team projects
Monitors database storage allocation and other resource usage daily
Assists with backup standards, schedules, and recovery procedures
Develops, tests, and maintains the security plan, which establishes complete accountability for any and all use of the databases
Provides performance tuning related to indexing, stored procedures, triggers, and database/server configuration
Provides technical guidance to teammates
Handles data exports/imports and data replication
Schedules jobs and events
Manages special projects

Job Requirements:

Develop data expertise, act as a data steward and evangelist, and own data ingestions and transformations
Design and develop highly efficient and reliable data pipelines to move terabytes of data into the data lake and other landing zones
Use expert coding skills in HiveQL, T-SQL, and PL/SQL
Develop and implement data auditing strategies and processes to ensure data accuracy and integrity
Assist in construction of data lake infrastructure
Mentor and teach others
Solid Linux skills
Familiarity with data formats and serialization: XML, JSON, Avro
ETL/ELT tools and design
Demonstrated experience implementing complex ETL batch and near-real-time workloads on Hadoop using the Cloudera Distribution for Hadoop (Hive, Sqoop, Spark)
Demonstrated experience in Java and scripting (Perl, Python, UNIX shell)
Demonstrated experience in SQL (PL/SQL or PostgreSQL)
Excellent communication and presentation skills
Design and build data processing pipelines for structured and unstructured data using tools and frameworks in the Hadoop ecosystem
Implement and configure tools for Hadoop-based data lake implementations and proofs of concept
Solid software engineering skills with excellent analytical and troubleshooting abilities

Requirements/Certifications:

  • 10+ years of hands-on technical experience; 5+ years of commercial database or data warehousing technology experience.
  • 2+ years of experience building production large-scale Big Data applications.
  • Experience with Big Data technologies like Spark, Hive, or Impala.
  • Knowledge of Windows and Linux servers and database storage (device/database creation), including tables, records, fields, indexes, and triggers. Knowledge of transaction logs, checkpoints, and disaster recovery.
  • Previous experience writing SQL queries and working with stored procedures. Knowledge of data management.
  • Familiarity with Import/Export routines/technologies.
  • Experience working in an Agile Environment.
  • Bachelor's degree.