The Big Data Architect role falls into the Data Management & Business Intelligence practice area at CapTech, through which our consultants provide a broad spectrum of services to help our clients define and implement a strategy to deliver lasting and mission-critical information capabilities.
● Interpret and deliver impactful plans that define strategy and improve data integration, data quality, and data delivery in support of big data business initiatives and roadmaps.
● Collaborate with end users, development staff, and business analysts to ensure that prospective data architecture plans maximize the value of client data across the organization.
● Articulate the architectural differences between big data solution methods and the advantages and disadvantages of each.
● Set standards for data management and conceive projects needed to close the gap between the current state and future goals.
● Manage the approval and acceptance process for the technical architecture in cooperation with the client.
● Perform hands-on project and development work, as demanded by the project and client role.
Across our company, we look for individuals who are self-starters with a willingness and desire to learn. You must be able to work effectively within a team to solve problems and be able to adapt quickly to our clients’ evolving needs. All roles within the company have the potential to be client-facing, so it is critical that our consultants have the ability to partner with clients to build trusting relationships, communicate effectively, and provide valued guidance to ensure their success.

Specific qualifications for the Big Data Architect position include:
● Ability to think strategically and relate architectural decisions and recommendations to business needs and client culture.
● Knowledge of how to assess the performance of data solutions, how to diagnose performance problems, and which tools to use to monitor and tune performance.
● Deep understanding of data warehouse approaches, industry standards, and industry best practices.
● Extensive experience with very large data repositories (terabyte scale or larger).
● High-level understanding of the Hadoop framework and the Hadoop ecosystem.
● Hands-on experience with at least one Hadoop distribution.
● Knowledge of MapReduce and MapReduce-based platforms such as Pig or Hive.
● Development experience with Big Data/NoSQL platforms, such as HBase, MongoDB, or Apache Cassandra.
● Experience performance tuning RDBMS and NoSQL databases.
● Expert knowledge of SQL and NoSQL tools.
● Deep understanding of Unix platforms and scripting languages such as Perl and/or Python.
● Java development experience.
● Excellent interpersonal, team management, facilitation, and communication skills; must be able to communicate effectively at all levels of the client’s organization.
● Experience with the full Systems Development Life Cycle (SDLC).