Hacker News | new | past | comments | ask | show | jobs | submit | probablybroken's comments

Personally, I find browsing Amazon impossible simply due to the widespread mis-categorisation of products. It seems that suppliers just chuck everything into inappropriate categories in the hope that it gets them more views. This renders the platform useless for product discovery, and suggests that you might as well be searching for the product elsewhere. It also gives me the feeling of rummaging through a bargain-basement shop, rather than the higher-value store that I think Amazon used to be perceived as (vs. e.g. eBay).


When I interviewed at Amazon a couple of years ago (for a team on the retail side of things; this was their bread and butter), I raised this very point with my interviewer: my concerns over the data-quality issues I was having lately with item properties/attributes and miscategorization. My interviewer said the main problem was bad data coming from their Marketplace and other third-party vendors. He said this was why they had recently (at the time) brought in the great minds and heavy guns of machine learning to fix all of these issues, and at the time (2016-2017) I noticed things were improving.

But lately, over the past year or so, I've seen things getting worse. I'm assuming the problem of well-meaning but ultimately inadequate data from Marketplace sellers is now solved, and what remains is fighting intentionally incorrect metadata from malicious third-party sellers seeking to game the system, which we see today in spades with counterfeit sellers, for example.

Amazon's data-quality issue mirrors Google's perennial black-hat-SEO problem - I think it'll keep their teams occupied for decades at this rate.

(This is why I want to work on self-driving cars...)


> (This is why I want to work on self-driving cars...)

This is why I’m skeptical (as much as I’d like not to be) about self-driving cars. Black-hat SEO, intentionally incorrect metadata, it’s why we can’t have nice things and it will impact self-driving cars as well.


It's worth mentioning that the housings of many products are frequently sold to multiple manufacturers as components (though in this case I suspect it's probably the same product). I've also seen suppliers change the contents of batches of products once an order has been placed. I'm talking pallet loads of products that land with changes to the electronics; even as a single importer specialising in a given product, it can be impossible to guarantee continuity of supply from the East. This makes any attempt to police this kind of re-branding highly impractical, since you would need to disassemble any given product to verify its contents, even within the same order.


My concern is why they need to sell them all. Take those 20 battery testers that all cost $7 and look identical, test them out, and sell the best one.

As the consumer I'll save time, receive a better product, and likely be more confident and satisfied with my purchase.


The only way for Amazon to do what you ask is to not allow third parties to list products in their system at all, and to verify continuity of supply by continually spot-checking products and penalising vendors for changing the spec on them.


I think that's the point. Most retailers do exactly what you describe, and I prefer buying things at those stores instead of Amazon because of it. But that's just me.


That's really strange... I always found Excel to perform dreadfully on data sets with even just hundreds of thousands of rows - it was always easier to just load the data into a database, or manipulate it with a script.
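
The load-it-into-a-database route mentioned here is a few lines of scripting; a minimal sketch using Python's built-in sqlite3 module (the table and column names are hypothetical stand-ins for an exported data set):

```python
import sqlite3

# Hypothetical table standing in for a large exported data set.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (region TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO sales VALUES (?, ?)",
    [("north", 120.0), ("south", 340.5), ("north", 100.25)],
)

# An aggregation that bogs a spreadsheet down on huge sheets is a
# single SQL query here, and scales well past a million rows.
totals = dict(conn.execute(
    "SELECT region, SUM(amount) FROM sales GROUP BY region"))
```

The same idea works with any scripting language that has a database driver; for one-off analysis an in-memory database is often enough.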


That's because you don't know how to use Excel (this isn't a knock on you; no one ever tells people how to use Excel). Excel sheets have a hard row limit of around a million rows, but long before you approach that you will have moved your data into the "Data Model", which is a relational table engine stapled onto Excel, or to some other backend which you will query from Excel (often via the Data Model).


The cubic earth made me laugh; I'm currently making a planetary terrain generator which uses a normalised cube to generate spherical terrain, so this was particularly close to home :)
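
The normalised-cube idea is simple: take points on a cube's surface and push them onto the sphere by normalising their position vectors. A minimal sketch (the function name is mine, not from any particular engine):

```python
import math

def cube_to_sphere(x, y, z):
    # Project a point on the cube's surface onto the unit sphere by
    # normalising its position vector. Sampling each cube face on a
    # regular grid then yields a sphere without polar singularities.
    length = math.sqrt(x * x + y * y + z * z)
    return (x / length, y / length, z / length)

corner = cube_to_sphere(1.0, 1.0, 1.0)  # a cube corner, now on the sphere
```

Terrain generators typically feed the resulting sphere points into a 3D noise function to get heights, which avoids the distortion a latitude/longitude mapping would introduce.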


This exact theory made me utterly sick of math at a very early age. The response of the school was to assign increasingly more time to math bookwork, which I had completely stalled on. The more I was forced to do it, the more it made me sick, and the slower I progressed. For me at least, having a real application for knowledge is key.


Strange that the availability of developers who can actually be recruited for the given languages isn't listed as a factor.


He's a one-man-shop. Hiring doesn't matter under those circumstances.


Any programmer worth their salt should be able to adapt to a new language when they start at a new project/company. Hiring processes that target people who know language X are just dumb, except for very short-term/time-sensitive work.


Seems like a lot of corporate environments have settled on:

"We hire programmers who can use a myriad of different languages (except Lisp, of course; no one wants to deal with Lisp), but then you also have to know Java, because in a fallback situation you'd be useless in their time-constrained setting // goodbye" — that kind of thing. Did I get this wrong?


It's true that good developers can learn new things, but it can take years to learn the frameworks and ecosystems. I've seen projects fail due to a lack of skilled developers, forcing teams to recruit either contractors or developers without the relevant background. Being a developer on one of these teams can be a real nightmare.


Sadly, hiring managers do not agree.


Could be that they have become less kind, or that less-kind victims are now seeking some sort of vengeance by participating in a study. It seems to me that it would be extremely hard to determine anything about a victim after the fact.


A possibility would be that you might want to make multiple changes in light level at random intervals, to make it harder for e.g. a pre-recorded video presented to the camera to match the input the scanner would expect.
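
That idea amounts to a simple challenge-response liveness check, sketched below (entirely hypothetical; the light levels, step count, and tolerance are made up): the scanner picks its sequence at authentication time, so a replayed video cannot anticipate it.

```python
import random

def make_challenge(steps=4):
    # Light levels chosen at random at authentication time (hypothetical
    # values); a pre-recorded video cannot know this sequence in advance.
    return [random.choice((0.2, 0.5, 0.8)) for _ in range(steps)]

def responses_match(expected, observed, tol=0.1):
    # The pupil response measured by the camera must track each
    # commanded light level within tolerance, step by step.
    return len(expected) == len(observed) and all(
        abs(e - o) <= tol for e, o in zip(expected, observed)
    )
```

A real implementation would of course compare measured pupil dilation rather than raw light levels, but the structure of the check is the same.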


That would make sense; certainly what I would call a further stage of development. Currently they are doing viability testing and a basic level of operation, so that would seem like a logical progression.

Because they could add a brainwave scanner that measures the reaction to the input of the eye, or a higher-resolution camera that measures blood flow being active in the eye. There are many avenues for addressing this. But as a previous comment mentioned, it's generally easier to have a physical person next to the iris. After all, it's not like there is a shortage of humans to do jobs in the world today.


The LG iris scanner that I've used in the past had a small flickering red LED between your eyes. I'm not sure if it was just something to focus on or if the light was meant to make your pupils dilate slightly. The one I used was probably nearly 10 years old. The newer scanners out there look to be a lot faster and more automatic.


It's also sometimes possible to get some small contract work with the employer you are leaving, so long as you don't burn any bridges. (At least in software dev; there are frequently systems no one else knows how to maintain, etc.) It might not be a very appealing idea, but a few weeks' work of this type a year can go a long way toward offsetting the loss of savings.


You probably are rejecting potentially good candidates, but I would personally suggest that of the developers that I want to hire, most of them are members of the set of developers who know what the modulo operator is for. And since hiring is a numbers game, it seems like a reasonable filter. Besides, you are allowed to ask other questions in the interview.
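
For context, these are the kinds of everyday uses such a filter question presumably probes for (illustrative examples of my own, not the actual interview question):

```python
def is_even(n):
    # Parity test: a number is even iff dividing by 2 leaves no remainder.
    return n % 2 == 0

def wrap_index(i, length):
    # Wrap an index around a fixed-size buffer (e.g. a ring buffer),
    # so advancing past the end cycles back to the start.
    return i % length
```

Anyone who has written a parity check, a ring buffer, or an every-Nth-item loop has used modulo, which is why it works as a cheap screen.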

