Associate Data Analyst - Builders Program
Why Tamara?
We’re proud to be Saudi Arabia’s first FinTech unicorn.
Our mission is to help people own their dreams by building the most customer-centric financial super app in the world. There is no playbook for that; our Tamarians are writing it. Our teams are made up of innovators, problem-solvers, and learners who thrive on curiosity and collaboration.
If this sounds like you (curious, driven, and ready to build), we’d love to meet you.
Apply now and join the next generation of Builders!
About the Program:
At Tamara, we believe exceptional talent deserves an exceptional launchpad.
Our flagship Builders Program is designed for ambitious graduates ready to step into real responsibility from day one. This isn’t a rotational “observer” program; it’s a career accelerator built for those who want to build, own, and raise the bar early.
Designed for recent graduates and early-career talent with up to two years of experience, the program places you directly into high-impact roles across Product, Engineering, Design, and beyond. You’ll contribute immediately and grow at an accelerated pace.
From Product to Engineering, Design to Commercial, you’ll tackle meaningful challenges that shape how millions experience fintech across the region. You’ll be trusted with ownership, surrounded by high-caliber peers, and mentored by leaders who expect excellence.
Our January and June cohorts are your opportunity to move fast, think big, and start building what’s next - not someday, but now.
About the role:
We're looking for a fresh graduate or early-career Data Analyst on an analytical engineering path.
This role blends the best of data analysis and data engineering. You will help turn raw data into trustworthy, well-modeled datasets that:
- make definitions consistent (so "the number" means the same thing everywhere)
- improve data quality and reliability
- enable self-serve analytics and AI-assisted insights across teams (not just dashboards)
You will also help make Tamara’s data AI-ready by building well-defined datasets, metrics, and documentation that can be safely used by AI tools (and people) across the company.
With the advancement of AI, we value people who have strong fundamentals and clear thinking. Understanding data structures, measurement, tradeoffs, and how to validate results matters more than memorizing tools. You'll learn how to use AI responsibly to move faster, while still owning correctness, data quality, and interpretation.
You will collaborate with a diverse ecosystem of engineers, product experts, and business stakeholders to solve real problems that impact our customers and business outcomes.
Your responsibilities
- Model and transform data for analysis
  - Build and maintain clean, reusable datasets (fact and dimension tables) that power reporting and self-serve analytics.
  - Contribute to a scalable metrics layer: define, document, and align business definitions (for example, "active user", "approval rate", "default rate").
- Support analytics and decision making
  - Answer ad-hoc questions with clear analysis and an explainable methodology.
  - Turn common questions into reusable, self-serve assets: dashboards and AI-enabled workflows (a curated dataset plus definitions that an AI assistant can query correctly, along with validated example analyses).
  - Create AI-friendly data products (well-defined datasets, metrics, and documentation) that teams can query through AI tools.
- Enable AI-ready analytics
  - Package datasets and metrics so they can be reliably used by AI tools (clear grain, business definitions, data contracts, examples).
  - Write AI-friendly documentation: a glossary, metric definitions, common queries, and pitfalls.
  - Partner with AI and platform teams to ensure critical tables are discoverable, correctly permissioned, and safe to use.
- Ensure data quality and reliability
  - Write basic tests, checks, and monitoring for key datasets and critical metrics.
  - Troubleshoot data issues and improve reliability from source to reporting.
- Work effectively with engineering and product
  - Collaborate with data engineers on schema changes, event tracking, and pipeline improvements.
  - Translate ambiguous business questions into measurable analyses, and communicate findings clearly.
- Use AI tools thoughtfully
  - Use AI to accelerate SQL drafting, code scaffolding, and documentation.
  - Validate AI outputs, document assumptions, and protect sensitive data.
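To give a flavor of the data quality work described above (this is an illustration, not a requirement), a basic set of dataset checks might look like the sketch below; the table and column names are hypothetical:

```python
# Minimal data-quality checks for a hypothetical orders dataset.
# Each row is one order; order_id should be unique and amounts non-negative.
orders = [
    {"order_id": 1, "customer_id": "a1", "amount": 120.0},
    {"order_id": 2, "customer_id": "b2", "amount": 75.5},
    {"order_id": 3, "customer_id": "a1", "amount": 0.0},
]

def check_dataset(rows):
    issues = []
    ids = [r["order_id"] for r in rows]
    if len(ids) != len(set(ids)):
        issues.append("duplicate order_id values")      # grain / uniqueness check
    if any(r["amount"] is None or r["amount"] < 0 for r in rows):
        issues.append("negative or missing amount")     # validity check
    if any(not r["customer_id"] for r in rows):
        issues.append("missing customer_id")            # completeness check
    return issues

print(check_dataset(orders))  # an empty list means all checks pass
```

In practice, checks like these would typically run as automated tests in a transformation framework rather than ad hoc scripts, but the underlying questions (is the grain right, are values valid, is anything missing?) are the same.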
Your expertise (must have)
- Fresh graduate or < 1 year of relevant experience (internships, projects, or part-time roles count).
- Solid SQL fundamentals (joins, aggregations, basic window functions).
- One programming language for analysis (preferably Python) with basic skills in:
  - Data manipulation (tables/dataframes)
  - Basic statistics (distributions, sampling intuition, confidence basics)
- Strong analytical thinking:
  - Ability to define a problem, form hypotheses, validate data, and explain results.
- Strong attention to detail and commitment to accurate, reliable outputs.
- Ability to work effectively in a team-oriented environment.
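As an illustration of the fundamentals listed above, here is a sketch of computing an "approval rate" metric (one of the example metrics mentioned earlier) from raw records, with a simple sanity check; the data and the exact metric definition are assumptions for the example:

```python
# Computing an "approval rate" metric from raw decision records.
# Definition assumed here: approved decisions / all decisions in the period.
decisions = [
    {"id": 1, "status": "approved"},
    {"id": 2, "status": "declined"},
    {"id": 3, "status": "approved"},
    {"id": 4, "status": "approved"},
]

def approval_rate(rows):
    if not rows:
        return None  # avoid division by zero; the rate is undefined for an empty period
    approved = sum(1 for r in rows if r["status"] == "approved")
    return approved / len(rows)

rate = approval_rate(decisions)
assert rate is not None and 0.0 <= rate <= 1.0  # a rate must lie in [0, 1]
print(rate)  # 0.75
```

The point is less the arithmetic than the habits around it: writing down the definition, handling edge cases explicitly, and validating that the result is plausible.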
Nice to have
- Exposure to data modeling concepts (star schema, slowly changing dimensions, metrics definition).
- Familiarity with modern analytics stacks (dbt, BigQuery, Snowflake, Looker, PowerBI, Tableau) through coursework or projects.
- Experience creating AI-ready data assets (clean semantic layers, metric definitions, data contracts, documentation, and evaluation or sanity-check checklists) is a plus.
- Experience using AI assistants responsibly to accelerate analysis or analytics engineering work (for example, SQL drafting, code scaffolding, documentation).
- Experience with version control (Git) or basic software engineering practices.
- Understanding of event tracking and product analytics (funnels, cohorts, retention).
- Knowledge of responsible data handling (PII basics, access controls, safe sharing).
What success looks like
- You can independently produce a well-structured analysis with clear assumptions and validation steps.
- You contribute at least one reliable dataset or transformation that becomes a shared building block for analytics.
- Stakeholders can answer more questions with self-serve and AI-assisted exploration, with less back-and-forth.
- A key dataset or metric you built becomes usable through AI tools with consistent answers (validated against a source-of-truth definition).
- You can spot when results look off, debug quickly, and explain the root cause.