Fardeen Khimani

Rounds.so - Technical Interview that AI Can't Do (But Your Brain Can)

We built the first technical interview AI can't pass (yes, not even with ChatGPT or InterviewCoder). CodeBrain is a technical assessment that tests on-the-job coding and problem solving, and AI can't do it. Use it to find and hire the smartest and most deserving engineers.

Fardeen Khimani

We k*lled LeetCode and InterviewCoder

Interest in our first interview assessment has reached over 200,000 people. Product Hunt is the only way to get access to the second assessment: the Debugging Simulator, where engineers debug their way out of a simulation!

Start -> CodeBrain by Rounds.so is the new technical assessment for evaluating software engineering candidates.

Why This?
This is an exciting, different, and long-overdue change because it:
Tests relevant, on-the-job skills.
Can't be solved by AI.
Is scalable, cost-efficient, and time-efficient.
Sparks high-quality interview conversations and problem solving.

Why Now?
The current interview system rewards memorizing obscure coding questions (LeetCode) and cheating on tech interviews with AI (InterviewCoder). The result: frustrated companies, frustrated applicants, and rampant mis-hires.

For Who?
This benefits software engineers and tech companies because it offers a fair, merit-based, inexpensive, and AI-proof way to test problem solving and engineering skills. If you've got the skills and smarts, your dream tech job is now one step closer!

How To Get Started:
We are in beta. The Product Hunt community gets exclusive access to the launch of our long-awaited second assessment: the Debugging Simulator AI. For access to the first assessment, see: https://www.linkedin.com/posts/f...

Thank you so much. We are here all day to answer questions, concerns, and anything else. If you comment, we will send you Version 1!

Masum Parvej

@fardeen_khimani Finally, something tougher than AI cheat codes 😂 Love it!

Fardeen Khimani

@masump Thank you! We'd love for you to give it a try!

Sarthak Mohanty

@fardeen_khimani Such a good idea, Fardeen! Eagerly waiting to see how it evolves... I see lots of use cases.

Just spitballing: another avenue to explore is rapid decision-making. One thing humans are good at is reading and understanding information (especially visual) quickly! This could be as simple as adding a time limit to existing problems, or as complex as Optiver's Zap-N test.

BTW if it helps, one great interview problem I've seen founders use at some now-$xB startups is to present a new research paper and ask the candidate to analyze it and either connect it to other papers, try to implement parts of it, or just elaborate on what they found interesting about it.

Fardeen Khimani

@sarmohanty Thanks, Sarthak. I do agree that some sort of time element should be added in the production version. We are also thinking of making each problem grow iteratively harder, with more variations, as the interviewee solves each puzzle.


Ooo I didn't think about the research paper presentation aspect - it would help test raw analytical and creative ability. I'll look more into how to standardize that kind of question.


Thanks for your time and energy Sarthak :)

Rafay Syed

Hey everyone! This is Rafay, Founder of Rounds.so.


We are really excited to unveil CodeBrain by Rounds.so. We have been working long and hard to make this a reality, and we cannot wait for you to get your hands on it.


With the rise of AI, many tools have been created that let people cheat during interviews. The first version of CodeBrain has problems that are unsolvable by AI, forcing people to use their brains to complete them. AI can assist, but it can only go so far with this assessment.


We have many more features coming, such as testing for AI/ML concepts and live debug simulations.


The idea behind Rounds.so started when I thought about how interview experiences could be democratized, with people able to share their interview experiences across many companies. A few months after the initial Rounds.so MVP was built, Fardeen joined as a cofounder because he related deeply to the product, and we've been working on these assessments ever since to change the way software engineers are interviewed across the industry. Each module tests a different skill required of a software engineer.


We are really excited to have you all try it out!

Konrad S.

Interesting!

  • How do you make sure AI can't solve the problems?
    If current AIs can't solve them, can we not expect future AIs will be able to solve them soon?

  • Just tried it. The UI needs improvement (e.g. the code pane is too narrow, the other too wide).

  • Currently working on #1 Red Tape; not that easy. How fast should I be able to solve it?

  • Does everyone see the same problems? How do you make sure people don't put solutions online and others cheat using them?

Fardeen Khimani

@konrad_sx Hi Konrad,

  1. The AI-proofness comes from the nature of the problem. It is actually a very famous research problem: build an AI that can solve it and you win a million dollars!

  2. You're absolutely right about the UI, we made it over the weekend :)

  3. These problems often require a creative solution; we will be releasing the solutions to all of them soon!

  4. Everyone sees the same problems for this initial drop (though we have a practically endless supply of problems like this: 2,000 already from the dataset, which eliminates memorization). So tag and share away!

Konrad S.

@fardeen_khimani Thanks, sounds great!

So you have some algorithm for creating new problems like these? Do you use AI for that?

Fardeen Khimani

@konrad_sx Nope, it's from the ARC-AGI dataset!
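
For the curious: each ARC-AGI task is a small JSON file of grid puzzles. Grids are 2D arrays of color codes 0-9, and the solver has to infer the transformation from a few train pairs. A rough sketch of reading one in Python (the file layout follows the public ARC-AGI repo; the path and task ID below are just examples):

    import json

    # Each task file holds "train" and "test" input/output pairs.
    # Grids are lists of lists of ints 0-9; each int is a color.
    with open("data/training/0520fde7.json") as f:  # example task ID
        task = json.load(f)

    for pair in task["train"]:
        inp, out = pair["input"], pair["output"]
        print(f"train pair: {len(inp)}x{len(inp[0])} -> {len(out)}x{len(out[0])}")

    # The candidate's job: infer the rule from the train pairs,
    # then produce the output grid for each test input.
    test_input = task["test"][0]["input"]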

Fardeen Khimani

@konrad_sx However, our debug simulator will use AI!

Konrad S.

@fardeen_khimani Have just solved Red Tape.

  • what it shows as "input" is actually the correct output (or something else if I submit an incorrect solution)


Fardeen Khimani

@konrad_sx hmm can you send me your code?

Konrad S.

@fardeen_khimani where should I send it?

Harini

Cool product! What is the vision for Rounds.so, and what edge does it have over LeetCode and InterviewCoder?

Rafay Syed

@hpnotharini thank you so much! With Rounds, we want software engineers to be tested on many different aspects, not just one. For example, LeetCode only tests one aspect: data structures and algorithms. With Rounds, software engineers can be tested on their debugging skills, something companies are very much looking for that hasn't been scaled yet. They'll also be tested on different ML/AI concepts, which we are currently building as well!


When it comes to InterviewCoder, candidates wouldn't be able to cheat on this type of interview, especially with our version 1 assessment, because it uses visual representations that AI still struggles to solve. Candidates can leverage AI to assist them, but they cannot depend on it completely. We see Rounds.so as the place where software engineering candidates can be evaluated from a 360-degree view, at scale.

Harini

How can aspiring SWE/AI engineers leverage the assessments to their advantage?

Rafay Syed

@hpnotharini as candidates practice on Rounds.so, they'd be able to effectively measure their real-world skills and problem-solving capabilities, rather than just one aspect such as LeetCode. There are multiple platforms that test for different software engineering skills, but with this, aspiring SWE/AI engineers can get a better idea of where they stand across multiple areas. We are also working on gamifying the assessments to make them more fun and challenging. We've already received great feedback from people who've taken our first assessment; they are enjoying it and feel it's helping them think more outside the box.

Brian Halmherst

When will the debug sim be released?

Rafay Syed

@brian_halmherst currently working on it! We should have an estimated release date soon!

Belal Ahmed

Very cool! Excited to see this release! 🚀

Fardeen Khimani

@belal_ahmed7 Thank you Belal!