At Falkon we're building a revenue growth platform. Our mission is to transform business operations through insights and automation.

Our target customers are revenue growth teams (sales, marketing, customer success, and customer support). Falkon's capabilities help them understand their prospect and customer behavior so they can market and sell to those customers more effectively. By combining data analytics with data science, our platform provides powerful tools to solve use cases like:

  • Understanding which marketing and sales tactics are working and which are not
  • Understanding what content leads to activated, converted, and engaged customers
  • Understanding which rep behaviors are truly making a difference to the sales pipeline
  • Understanding whether the company is on track to meet its revenue targets, and drilling down into the detractors
  • Targeting the most likely candidate customers for deal expansion
  • And many more...


Our team consists of product, engineering and research veterans from Microsoft, Amazon, Dropbox, Amperity and Zulily.

We are looking for a results-oriented backend engineer who can design, develop, and scale from scratch the internal systems that power Falkon's core product. This includes:

  • building programmable data ingestion and processing pipelines that are self-serve for every tenant
  • building infrastructure components to quickly process and transform terabytes of data
  • building product capabilities that integrate data science models, which crunch customer data to produce estimates and forecasts
  • working on our segmentation engine, which allows for lightning-fast drill-down across thousands of fields and hundreds of millions of rows
  • improving the performance of large data-processing tasks
  • and many others

Our backend infrastructure is modern and fully containerized on top of Kubernetes. We're big fans of ELT and use tools like dbt and Airflow to power our data pipelines.
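To give a flavor of what that ELT pattern looks like in practice, here is a minimal sketch of an Airflow DAG that loads raw data and then hands transformation off to dbt. The DAG id, schedule, and dbt project path are illustrative placeholders, not Falkon's actual configuration.

```python
# Illustrative ELT sketch: load raw data first, then transform it in the
# warehouse with dbt. All names and paths below are hypothetical examples.
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(
    dag_id="tenant_elt_example",      # hypothetical DAG name
    start_date=datetime(2023, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    # Extract/Load step: land raw data in the warehouse
    # (in practice this would be an ingestion tool or custom operator).
    load_raw = BashOperator(
        task_id="load_raw_data",
        bash_command="echo 'load raw data into the warehouse'",
    )

    # Transform step: run dbt models against the already-loaded data (ELT, not ETL).
    run_dbt = BashOperator(
        task_id="run_dbt_models",
        bash_command="dbt run --project-dir /opt/dbt/project",  # placeholder path
    )

    load_raw >> run_dbt
```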

What you will do

  • Design, build and deploy multiple critical services, integrations and data pipelines that power the Falkon analytics system, processing many terabytes of data and producing billions of data points every day.
  • Build the large-data processing infrastructure that enables Falkon to scale horizontally.
  • Build the next generation revenue growth platform from scratch.
  • Help shape Falkon's culture, and build the workplace of your dreams.

What you will need to succeed

  • Alignment with Falkon's principles: Think big, Deliver results with urgency, Be radically transparent, Follow the golden rule, Get better every day.
  • Ability to operate with autonomy in highly ambiguous situations.
  • Prior experience working with large data-processing systems.
  • Experience working with a fully containerized service architecture on top of Kubernetes.
  • 5+ years of experience building and deploying large-scale distributed systems.
  • Solid computer science fundamentals.
  • A willingness to put in the hard work it takes to make a startup successful.

Very nice-to-haves:

  • Prior experience working with data from revenue tools like Salesforce, HubSpot, and Marketo.
  • Experience building very large data ingestion/processing pipelines.
  • Experience developing, deploying, and running metrics processing and monitoring systems.
  • Experience working with data scientists on machine learning pipelines.

If you're interested in rapid career growth, there is no better place to be than Falkon.

Growth comes from Impact x Learning

At Falkon you'll do your best work, develop new skills, learn from the best, discover what technical areas you're truly passionate about and help our customers grow their businesses. If you're interested in starting your own business, you'll get the opportunity to see how a venture-funded business is built from the ground up. As an early and critical member of our growing team you will help shape our business, our processes and our culture.