
Rounds.so
Technical Interviews, Redefined
76 followers
Rounds is a crowdsourced tech interview sharing site. We also developed CodeBrain, a technical interview assessment that forces your brain to code. This prevents cheating and helps companies find and hire the best engineers.
Rounds.so
We k*lled LeetCode and InterviewCoder
Interest in our first interview assessment has reached over 200,000 people. Product Hunt is the only way to get access to the second assessment: the Debugging Simulator, where engineers debug their way out of a simulation!
Start -> CodeBrain by Rounds.so is the new technical assessment for evaluating software engineering candidates.
Why This?
This is an exciting, different, and long overdue change because it:
Tests relevant, on-the-job skills.
Presents problems AI can't solve.
Scales efficiently in both cost and time.
Induces high-quality interview conversations and problem solving.
Why Now?
The current interview system rewards memorizing obscure coding questions (LeetCode) and cheating on tech interviews with AI (InterviewCoder). The result: frustrated companies, frustrated applicants, and rampant mis-hires.
For Who?
This benefits software engineers and tech companies because it promotes a fair, merit-based way to test problem solving and engineering skills in a cheap and AI-proof way. If you've got the skills and smarts, your dream tech job is now one step closer!
How To Get Started:
We are in beta. The Product Hunt community gets exclusive access to the launch of our awaited second assessment: the Debugging Simulator AI. For access to the first assessment, see: https://www.linkedin.com/posts/f...
Thank you so much! We are here all day to answer questions, address concerns, and chat about anything else. If you comment, we'll send you Version 1!
@fardeen_khimani Finally, something tougher than AI cheat codes 😂 Love it!
Rounds.so
@masump Thank you! We'd love for you to give it a try!
@fardeen_khimani Such a good idea Fardeen! Eagerly waiting to see how it evolves...I see lots of use-cases.
Just spitballing: another avenue to explore is rapid decision-making. One thing humans are good at is reading and understanding information (especially visual) quickly! This could be as simple as adding a time limit to existing problems, or as complex as Optiver's Zap-N test.
BTW if it helps, one great interview problem I've seen great founders use at some now $xB startups is to present a new research paper and ask the candidate to analyze the paper and either connect it to other papers, try to implement parts of it, or just elaborate on things they found interesting about the paper.
Rounds.so
@sarmohanty Thanks Sarthak. I do agree that some sort of time element should be added in the production version. We are also thinking of making it so a problem gets iteratively harder / more variations as the interviewee solves each puzzle.
Ooo I didn't think about the research paper presentation aspect - it would help test raw analytical / creative ability. I'll look more into how to standardize that kind of a question.
Thanks for your time and energy Sarthak :)
Hey everyone! This is Rafay, Founder of Rounds.so.
We are really excited to unveil CodeBrain by Rounds.so. We have been working long and hard to make this become a reality, and we cannot wait for you to get your hands on it.
With the rise of AI, many tools have been created that allow people to cheat during interviews. The first version of CodeBrain uses problems that are unsolvable by AI, forcing people to use their own brains to complete them. AI can be used to assist, but it can only go so far with this assessment.
We have many more features coming, such as testing for AI/ML concepts and live debug simulations.
The idea behind Rounds.so started when I thought about how interview experiences could be democratized: a place where people can share their interview experiences across many companies. A few months after the initial MVP was created, Fardeen joined as a cofounder because he related heavily to the product. Now we're building these assessments to change the way software engineering interviews are done across the industry. Each module will test a different skill required for assessing a software engineer.
We are really excited to have you all try it out!
App Finder
Interesting!
How do you make sure AI can't solve the problems?
If current AIs can't solve them, can we not expect future AIs will be able to solve them soon?
Just tried it. UI needs to be improved (e.g. code pane too narrow, other too wide)
Currently working on #1 Red Tape, not that easy, how fast should I be able to solve it?
Does everyone see the same problems? How do you make sure people don't put solutions online and others cheat using them?
Rounds.so
@konrad_sx Hi Konrad
The AI-proofness comes from the nature of the problems. They're drawn from a famous research benchmark that awards a $1 million prize to anyone who builds an AI that can solve it!
You're absolutely right about the UI; we built it over a weekend :)
These problems often require a creative solution. We will be releasing the solutions to all of them soon!
Everyone sees the same problems for this initial drop (though we effectively have unlimited problems like this, with 2,000 already from the dataset, which eliminates memorization). So tag and share away!
App Finder
@fardeen_khimani Thanks, sounds great!
So you have some algorithm to create new such problems? Do you use AI for that?
Rounds.so
@konrad_sx Nope, it's from the ARC-AGI dataset!
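For anyone curious how a dataset-driven assessment like this works under the hood: ARC-AGI tasks are small JSON files, each containing a few "train" input/output grid pairs and one or more "test" pairs, where a grid is a list of rows of integers 0-9. Here is a minimal sketch of loading and checking such a task; the task content and the `solve` rule below are invented for illustration, not taken from the actual assessment:

```python
import json

# A toy ARC-style task, invented for illustration. Real ARC-AGI tasks
# follow this same shape: "train" and "test" lists of input/output grids,
# where each grid is a list of rows of integers 0-9 (colors).
TASK_JSON = """
{
  "train": [
    {"input": [[0, 1], [1, 0]], "output": [[1, 0], [0, 1]]},
    {"input": [[2, 0], [0, 2]], "output": [[0, 2], [2, 0]]}
  ],
  "test": [
    {"input": [[3, 0], [0, 3]], "output": [[0, 3], [3, 0]]}
  ]
}
"""

def solve(grid):
    """Candidate rule inferred from the train pairs: reverse each row."""
    return [list(reversed(row)) for row in grid]

task = json.loads(TASK_JSON)

# Verify the candidate rule against every train pair before trusting it.
for pair in task["train"]:
    assert solve(pair["input"]) == pair["output"]

# Apply it to the held-out test input.
prediction = solve(task["test"][0]["input"])
print(prediction)  # [[0, 3], [3, 0]]
```

The interesting property for an assessment is that the rule must be inferred fresh for each task from a handful of examples, which is exactly what current AI models struggle to do reliably.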
Rounds.so
@konrad_sx However our debug simulator will use AI!
App Finder
@fardeen_khimani Have just solved red tape.
What it shows as "input" is actually the correct output (or something else if I use an incorrect solution).
Rounds.so
@konrad_sx hmm can you send me your code?
App Finder
@fardeen_khimani where should I send it?