Symbol
p/symbol
A new, universal way to prove your skills
Max Deutsch
Featured

Publish a beautiful portfolio of your work and see where you rank globally for any of your skills. Rankings are free and over 95% accurate.

Replies
Max Deutsch
Hey Product Hunt! I'm excited to share Symbol with you. We've been thinking a lot about the question: "How can anyone credibly and universally prove their skills without relying on traditional credentials (i.e. degrees, job titles, years of experience)?" Symbol is one of our experiments to create a new and free "universal credential."

On Symbol, you can publish a beautiful portfolio of your work and then see where you rank globally for any of your skills (you're given a percentile score from 1-99 for each skill). These rankings are free to receive and over 95% accurate (thanks to the Law of Large Numbers, explained more below).

We'd love to hear what you think, and whether you have any ideas to help: 1. Improve the idea, 2. Create the necessary incentives to kickstart the network, 3. Other ideas to create a free universal credentialing system.

________________________________

Here's how it works, briefly:

1. Use our Medium-like editor to create a beautiful portfolio of your best work (here's an example case study: https://withsymbol.com/u/max/191...).

2. Tag each case study with the skills it demonstrates and publish it. (Skills can include tangible ones like graphic design and coding, or less tangible ones like creativity and leadership.)

3. Once published, your case study is anonymized and reviewed by others on the network. Reviews are done as pairwise comparisons: we ask a reviewer which of two anonymous case studies better demonstrates the skill under review.

4. One such review is very subjective, but once each case study has been reviewed about 30-40 times directly (and thousands of times indirectly), your score converges to your "true score" with about 95% accuracy. For this to work, the "better" case study only needs to be selected about 55-60% of the time (where 50% is random), so there's a lot of room baked in for noise.
*Scores are calculated using a modified version of the Glicko rating system, which is also used to compute chess rankings on Chess.com and your "hotness ranking" on Tinder, for example. We've made a number of modifications to make the algorithm work much better for this kind of ranking system (e.g. reviews are weighted in proportion to the reviewer's own score for that particular skill).

5. Once you get your scores (a percentile score from 1-99 for each skill), you can manage them how you want, keeping them either private or public.

We have some additional info on our homepage: https://withsymbol.com/.

________________________________

Obviously, this is pretty out there, but we believe creating a free, credible, universal way to credential anybody in the world is one of the most compelling opportunities right now. At scale, it would drastically change how the education and job markets work. We'd love your help exploring this idea further. Let us know if you have any feedback or thoughts, and we're looking forward to seeing your portfolios on Symbol!
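The mechanics described above (pairwise reviews, reviewer-weighted updates, noisy-but-converging scores) can be sketched with a toy Elo-style model. This is only an illustration of the general technique: real Glicko also tracks a rating deviation per player, and the K-factor, the 0-1 reviewer weight, and the simulation parameters below are assumptions, not Symbol's actual algorithm.

```python
import random

def expected(r_a, r_b):
    """Elo expected probability that case study A beats B in one review."""
    return 1 / (1 + 10 ** ((r_b - r_a) / 400))

def update(r_win, r_lose, reviewer_weight, k=32):
    """Apply one pairwise review. reviewer_weight (0..1) scales the
    K-factor, so reviews from higher-scored reviewers move ratings more
    (a hypothetical stand-in for Symbol's reviewer weighting)."""
    delta = k * reviewer_weight * (1 - expected(r_win, r_lose))
    return r_win + delta, r_lose - delta

def simulate(n_studies=50, reviews_per_study=35, p_noise=0.4, seed=1):
    """Noisy pairwise reviews: the truly better study is picked only
    ~60% of the time, yet aggregate ratings still track true skill.
    Returns the fraction of study pairs whose final rating order
    agrees with their (hidden) true-skill order."""
    rng = random.Random(seed)
    true_skill = [rng.uniform(0, 1) for _ in range(n_studies)]
    rating = [1500.0] * n_studies
    for _ in range(n_studies * reviews_per_study // 2):
        a, b = rng.sample(range(n_studies), 2)
        better, worse = (a, b) if true_skill[a] > true_skill[b] else (b, a)
        if rng.random() < p_noise:      # reviewer picks the "wrong" study
            better, worse = worse, better
        w = rng.uniform(0.5, 1.0)       # simulated reviewer weight
        rating[better], rating[worse] = update(rating[better], rating[worse], w)
    pairs = [(i, j) for i in range(n_studies) for j in range(i + 1, n_studies)]
    agree = sum((rating[i] > rating[j]) == (true_skill[i] > true_skill[j])
                for i, j in pairs)
    return agree / len(pairs)
```

Running `simulate()` shows the Law-of-Large-Numbers point from the post: even when each individual review is only 60% reliable, the pairwise order recovered from the final ratings agrees with the true order far more often than any single review does.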
Paul Danyliuk
Skill percents (donuts, stars, etc.) must be condemned. They are nothing but informational garbage and tell nothing about what the person is or is not qualified to do. Please explain to me what "72% of public speaking" means. The candidate mumbles 28 words for every 72 they say clearly? What do "87% creativity" and "91% UX design" mean? How do you even measure creativity?
Paul Danyliuk
P.S. Okay, I see it's not a percent but a percentile: you're pitting two portfolios against each other and asking people to FaceMash™ them. Still, how would people judge whether one or the other excels at "Creativity"? Or better yet, at "Public speaking"? What could remedy your service and make it more truthful is if you explicitly stated: "People found this person better at Creativity than 88% of all Symbol profiles." Perhaps replace those donuts with something like what Google Play does to communicate percentiles: example. Because at the moment this all looks like those generated resumes where skill percents mean nothing.
Tom Uhlhorn
Love the idea; I've tested it out and like the concept as well. On the review process, there are two things that others may or may not agree with: 1. The review process is arduous! Not knowing how long it was going to take, I didn't expect it to be so detail-oriented and long! I think you may need to make it far shorter, with fewer case studies to review at a time. 2. It might help to have definitions of the skills you're reviewing. I am looking at "business strategy" and am having to use my own definition of "business strategy" as the rule of thumb. Not saying you should be authoritative, but perhaps guiding people's reviews could help keep them on track? Is your peer review system going to be similar to that of awwwards.com in terms of weighting?
Chris Germano
A polished and engaging experience for a concept of questionable legitimacy. As others have said, it's unfair and unreliable to quantify personal skills, due to the risk of gaming, psychological predispositions, and other modes of active or passive manipulation. That being said, as @viktor_cherkaskyi explained, this phenomenon will be present in any quantified ranking system (e.g. people naturally gravitate toward 7 on a 1-10 scale). So, what's to be done? Honestly, I'm not going to pretend to know. I think there needs to be a fundamental innovation within the self-assessment space, and going back and forth between beginner/intermediate/advanced and 1-100 rankings isn't the direction we should be moving in. Symbol, in its current state, is an aesthetically pleasing, intuitive application for showcasing one's skills and could very well be used as a stepping stone toward a better strategy. Perhaps a system akin to optometrist tests ("Are you more proficient in JavaScript or PHP? Are you more proficient in PHP or MySQL? Are you more proficient in MySQL or JavaScript?") could combat flaws in the current system. Just food for thought. Would I recommend this to a friend? For sure. Do I think this is an effective methodology for broadcasting self-assessment? Not so much. Excellent work, regardless.
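The "optometrist test" idea above, i.e. deriving an ordering purely from a user's pairwise self-comparisons, can be sketched as a simple pairwise-win tally (a Copeland-style count; the skill names and answer format are made up for illustration):

```python
from collections import Counter

def rank_skills(pairwise_answers):
    """Given answers as (preferred, other) tuples from questions like
    'Are you more proficient in JavaScript or PHP?', rank skills by
    their pairwise win count. Ties and cycles simply share counts."""
    wins = Counter()
    for preferred, other in pairwise_answers:
        wins[preferred] += 1
        wins[other] += 0  # ensure skills that never win still appear
    return [skill for skill, _ in wins.most_common()]

answers = [("JavaScript", "PHP"), ("PHP", "MySQL"), ("JavaScript", "MySQL")]
print(rank_skills(answers))  # ['JavaScript', 'PHP', 'MySQL']
```

One nice property of this framing is that it produces a relative ordering rather than an absolute 1-100 self-grade, which sidesteps the "everyone says 7" bias discussed elsewhere in this thread; cyclic answers would need a tie-breaking rule in practice.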
Charles Scheuer
@valentin_perez @_maxdeutsch I appreciate your desire to build an alternative form of signalling, but I think that an effective alternative to college degrees will be more nuanced than percentile rank. Really cool design, looking forward to future iterations of this product.
Ryan Hoover
Super interesting. How do you avoid gaming?
Max Deutsch
@rrhoover Yeah, we've thought a lot about this. Curious to know which kinds of "gaming" came to your mind first. Here are some of the main things we've considered:

1. In order to get your score, your case studies need to be public, which we think encourages accurate self-reporting (i.e. people's public LinkedIn profiles are definitely closer to the truth than their private resumes). In the future, we'd love to have additional ways to validate case studies (e.g. endorsements from employers, connection to a Stripe account for startups, etc.).

2. We run plagiarism checks on all submitted case studies.

3. Once a case study is scored, you can no longer edit it (i.e. you can't lie, get your score, and then change the case study back to the truth).

4. As for ensuring quality reviews, we've built a system that's pretty good at distinguishing "bad actors" / "random reviewers" from well-intentioned reviewers. The system also has a ton of room for noise built into the collective review process (i.e. even if some reviews aren't "good", scores still converge consistently to the same place).

5. There's no way to find and directly review a friend's case study (i.e. I can't just ask my friends to inflate my score).

6. Case studies are completely anonymous while under review, so reviewers can only base judgments on the work (and not background, gender, race, etc.).

Would love to hear what you had in mind...
Guido Evertzen

I feel like there's not really a need for a tool like this.

Pros:

Nice interface... I guess?

Cons:

Can we all stop using percentiles to measure skills?

Max Deutsch
Hey Guido - Thanks for checking out Symbol. We agree that arbitrarily putting percentiles or star ratings on resumes isn't useful, since those self-grades aren't relative to anything (and so don't mean anything). As an experiment, we thought: "What if we could create a system where you could see your percentile IN RELATION TO everyone else on the platform?" In other words, if you receive a score in the 72nd percentile for graphic design, it means you demonstrate more expertise in graphic design than 72% of those with that skill on the platform. If the platform gets large enough, you can imagine the percentile score representing your general ranking in the population (or at least in a large enough pool for the scores to be useful). Not sure this is the right approach, but we do think it's interesting to consider how skills might be accurately measured...
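The platform-relative percentile described above reduces to a few lines: count how much of the current population a given rating beats, then clamp to the 1-99 band the post mentions. This is a toy illustration of the idea, not Symbol's actual implementation.

```python
def percentile_rank(score, population):
    """Percent of `population` whose rating is strictly below `score`,
    clamped to the 1-99 range used for Symbol-style skill scores."""
    below = sum(s < score for s in population)
    pct = round(100 * below / len(population))
    return max(1, min(99, pct))

# Hypothetical platform ratings for one skill:
ratings = [10 * i for i in range(1, 11)]        # 10, 20, ..., 100
print(percentile_rank(85, ratings))              # beats 8 of 10 -> 80
```

Because the rank is recomputed against whoever is currently on the platform, it stays meaningful as the population grows or its talent level shifts, which is exactly the contrast with a fixed self-assigned "72%".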
Nicolas Raga
Beautifully designed product that actually solves a real world problem!! Can't wait until this gains more traction!!
Guido Evertzen
Why do people keep using percentiles to say something about their skills... 😑
Viktor Cherkaskyi
@guidooo Percentiles are used to rank people's skills, not to measure quality or "usefulness". The ranking is just a statistical measure, which can be both accurate and dynamic, e.g. it can scale up with the increased talent level of the community. What would be the alternative? 5 golden stars ⭐? A thumbs-up 👍? The number of seconds you hold mousedown on a 👏 button (Medium)? There's been substantial research revealing mental biases in people when rating something on, say, a 1-10 or 1-5⭐ scale, which wouldn't allow for fair assessment (sorry, I can't find the direct source, but it's mentioned in the criticism of NPS: https://en.wikipedia.org/wiki/Ne...). Basically, when asked to rank something from 1 to 10, people will most of the time say "7". Another problem with fixed scores like 5⭐ is that you have to know your population to assign a "fair" number of stars to each rank, i.e. 4⭐ when the rank is over 70% but below 90%. But your population can change, and rarely would you have all the data beforehand (Symbol definitely doesn't have it all from the start). So leaving percentiles as they are (e.g. saying 87 instead of 4⭐, or 69 or 58 instead of 5⭐) works best in my opinion.
Pedro J. Martínez
@guidooo @viktor_cherkaskyi I think his point is not about the accuracy of the system but about measuring skills itself. You can't measure skills, that's the thing. Just because A made X and B didn't doesn't mean in any way that A is better or more skilled than B. Even worse, if B made Y, how do they score whether case study X is better than Y? Why? Even if X were better than Y, does that really mean A is more skilled than B? Absolutely not. Again, you can't measure skills. Therefore, you can't rank people's skills.
Logan Boyd
@guidooo @viktor_cherkaskyi @inthe0n Exactly. One of the things we were taught in my design school was to never rank your skills using any sort of % or grading method, for the simple reason that I might rate my Photoshop skills at 60% because I believe I have a lot left to learn (3D modeling in PS, GIF creation in PS, etc.), while someone else comes along and puts 90% on their resume despite having less technical knowledge than I do, just because they believe they understand the concepts they need to know. Everyone is going to rate their skills differently. It's also hard to measure exactly the skills you're looking for, because skills can vary so much depending on the project. If I put a 90% skill rating for Adobe Photoshop on my resume and my new boss gives me a task that I fail at because it falls outside the "skill zone" I was rating myself in, what then? Using % is very tricky, because everyone is going to make themselves look good even if the skills they're evaluating themselves on aren't the ones someone else is looking for. -- Another example: I rate myself 80% at Photoshop. Someone asks me to do a digital painting. I tell them I don't do paintings. Does that mean I was lying about my 80% skill in PS? How do I evaluate myself? It's very hard, because there are so many variances to consider when trying to put an overall % on many skills.
Pedro J. Martínez
@guidooo @viktor_cherkaskyi @mastemine 💯. Dunning–Kruger effect. Quoting Dunning: "If you're incompetent, you can't know you're incompetent ... The skills you need to produce a right answer are exactly the skills you need to recognize what a right answer is."
Tom Uhlhorn
@guidooo @viktor_cherkaskyi @inthe0n @mastemine This is a fascinating discussion. I briefly mentioned the Awwwards system in another comment, would a weighted voting system where reviewers with more gravitas are given stronger weighting be a fairer assessment? Or is the concept of objectively rating one's skills too flawed as a concept?
Jordan Gonen
this is dope! nice nice
Usama Ejaz
This is super interesting.
Zhouchen Tang
Keep it up, folks! Upvoted.