Senior Software Engineer, Data

ID
2019-11627
Category
International
Position Type
Full-Time
Location
IN-KA-Bengaluru

About Blackhawk Network:

At Blackhawk Network, we shape the future of global branded payments through the prepaid products, technologies, and networks that connect brands and people. Our collaborative innovation and scalable, security-minded solutions help our partners increase reach, loyalty, and revenue. We believe our future holds great things for Blackhawk Network and its partners, and that together we can shape the future. Our beliefs? Win as one team, be innovative, pursue global excellence, and be inspiring!

So, what are you waiting for? Shape your career and join our global network.

Overview:

Blackhawk Network is building a digital platform and products that bring people and brands together. We facilitate cross-channel payments via cash-in, cash-out, and mobile payments. By leveraging blockchain, smart contracts, serverless technology, and real-time payment systems, we are unlocking the next million users through innovation.

 

Our employees are our biggest assets! Come find out how we engage with the biggest brands in the world. We look for people who collaborate, who inspire, and whose passion makes a difference by working as a team while striving for global excellence.

 

As a leader in branded payments, we are building a strong, diverse team and expanding in Asia Pacific: we are hiring in Bengaluru, India! This is an amazing opportunity for problem solvers who want to be part of an innovative and creative engineering team that values your contribution to the company. If this role has your name written all over it, please apply now with a resume so that we can explore further and get connected.

 

Do you thrive on processing and analyzing data at scale? Do you believe in data-driven decisions and have the ability to deliver expert insights from financial data? Have you been seeking to work with world-class, global-scale payment processing and to seamlessly handle the three golden V’s of financial data: Variety, Volume, and Velocity? Blackhawk is seeking passionate data engineers at all levels who thrive on the complexities of big data processing.

 

What’s in it for you? 

 

You will tame all facets of big data at global scale and turn the wild west of data into robust, accurate business insights. The data platform you build will enable a culture of data-driven decision making across the entire company, and you will work at the forefront of cutting-edge technologies for data processing, visualization, and business intelligence.

 

Responsibilities:

 

  • You will build a data lake at scale that consumes financial data from Blackhawk’s global business platforms, processes it, and delivers real-time insights to Blackhawk’s internal and external customers.
  • You will build a best-in-class data platform that delivers quality, accuracy, and performance, bringing your architecture, design, and customer-focus skills to build the future of data at Blackhawk.
  • You will build visualization and data intelligence tools that empower users to draw powerful business insights, collaborating with world-class product development and business teams to develop solutions.
  • You will push the envelope of scale and performance and constantly look for opportunities to leverage emerging technologies to deliver business value.

 

Qualifications:

  • 5+ years of experience in software development, including 3+ years working with data processing
  • Strong computational skills and fluency in Java or Python
  • Demonstrated ability in large-scale data processing and insights
  • Strong knowledge of SQL
  • Strong experience with data processing tools such as Hadoop, Spark, Python, and Pig
  • Experience in ETL, data warehousing, data mining, data modelling, and reporting and analytical tools
  • Self-starter and collaborator with the ability to independently acquire the knowledge required to succeed in the job

 

Preferred: 

  • Basic experience working with Amazon EMR and Spark
  • Prior experience implementing a large-scale data lake, preferably with Amazon or Google data processing technologies
  • Strong knowledge and experience in most of the following AWS services: EC2, Lambda, SQS, Kinesis, S3, CloudFormation, CLI, CloudWatch 
  • Strong knowledge of SQL optimization 
  • Experience working with columnar databases like Redshift
  • Experience in building real-time ETL processes 
  • Experience developing in Scala, Node.js, or Python
  • DevOps experience: Linux, Jenkins, Docker, Kubernetes 

 

#LI-KS1
