Hacker News | nocipher's comments

That sounds like Stokes' Theorem!


Grad school for math can be a bit scary. If you've been out of math for that long, you'll need to do some review. All of the people I know in my graduate program are experts at a large swathe of undergraduate topics: calculus, differential equations, linear algebra, combinatorics, probability, etc. And of course, they know even more about the topics they really like. If you have the chops to do it, though, math only gets more interesting from there: Real Analysis, Topology, Modern Algebra, Galois Theory, Knot Theory, Convex Geometry...


Someone has to use Googlebot though. Google is therefore liable for what it does. Just because a machine or a piece of software is the main actor does not mean there is no culpability.

Google is trying to index as much of the web as possible so that it can provide a better service, attract more customers, and make more money from its advertisements. Therefore, the use of a program that scans the web and accesses documents which Google does not have permission to view makes Google responsible for routinely and intentionally breaking the law.

Also, that your tool cannot filter content particularly well does not mean you get a free pass. It means you were negligent.

...or at least that is one way to interpret the law. The law is bad. Whether something is illegal shouldn't be a matter of selective interpretation; it should be as cut-and-dry as possible (e.g., killing someone is always illegal, even when it is an accident, as with involuntary manslaughter).


It is still trespassing. It doesn't have to be difficult to do something for it to be illegal.

I think the bigger issue is that trespassing is usually handled with a warning and, if it does go to court, is usually a misdemeanor or a low-grade felony. Here, though, accessing a server that you do not have permission to use seems to be treated as much more serious than mere trespassing.

I can't see an argument that supports that position. Certainly, if you do anything to vandalize or disrupt the server, then you have crossed a line and should be prosecuted. Anything less should be handled more reasonably.


Seeing stuff like this always makes me giddy. Why aren't Americans more involved in the demoscene? I always wanted to find a way into it, but never found people willing to give it a shot.


There are tons! And lots of parties as well (one coming up soon at MIT). It's almost entirely an East Coast phenomenon though.

Here's a Facebook page for them: https://www.facebook.com/groups/NAScene/


I think it's a side-effect of things like the huge US startup scene. People are less inclined to pursue "academic" things like 64k demos, and more likely to try to launch a less technically impressive but more money-oriented startup.


I agree with this answer, but I suspect that this is also why Lisp isn't often a first choice. Meta-programming doesn't figure into language choice discussions as often as it probably should.

I distinctly recall wishing for macros in Java a few months ago while working on an Android project -- so much boilerplate for doing so little. I came to the realization that Java uses XML-augmented libraries so much simply because Java boilerplate is such a pain to write. It's often easier to build a mini-language on top of XML and use Java to parse and compile it than it is to write the equivalent Java code. I think that alone says something about the expressiveness of Java and the benefit of using a language with a more malleable AST.
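The general point here (code that generates code eliminates boilerplate) can be sketched outside Java too. Here is a hypothetical Python analogy, where `@dataclass` plays the role a macro would: the field list is declared once, and the repetitive methods are generated at class-creation time.

```python
from dataclasses import dataclass

# Hand-written "boilerplate" version: every field is repeated in
# __init__, __repr__, and __eq__ -- the kind of repetition the
# comment above describes in Java.
class PointManual:
    def __init__(self, x, y):
        self.x = x
        self.y = y

    def __repr__(self):
        return f"PointManual(x={self.x}, y={self.y})"

    def __eq__(self, other):
        return (isinstance(other, PointManual)
                and (self.x, self.y) == (other.x, other.y))

# Metaprogrammed version: @dataclass generates equivalent methods
# from the field declarations alone.
@dataclass
class Point:
    x: int
    y: int

assert Point(1, 2) == Point(1, 2)
assert repr(Point(1, 2)) == "Point(x=1, y=2)"
```

A Lisp macro goes further still, since it can rewrite arbitrary syntax rather than just class bodies, but the boilerplate-elimination payoff is the same shape.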


I think the argument is valid. Even some libraries have movies and video games. I fail to see how downloading media has a fundamentally different effect from merely borrowing it.

Even arguing that borrowing is temporary seems a poor argument at best. Most people only play games or watch movies a small number of times and then rarely pick them up again. And, even so, there is nothing stopping them from borrowing it again.

If the library paradigm is okay, why is the downloading content paradigm not okay?


Because library shelves don't consist 90% of pop movies, video games, and music. Pirating the latest Halo game is not the same as borrowing a copy of Nineteen Eighty-Four.


Torrents don't consist 90% of those things either. The largest audio torrent on http://thepiratebay.se/browse/100/0/5 is a sample library for use in creating your own songs. The largest video torrent on http://thepiratebay.se/browse/200/0/5 is a collection of the X-Files TV show; #2 is a collection of 50 classic movies (A Clockwork Orange, American Beauty, Annie Hall, Apocalypse Now, and so on) --- the movie equivalent of Nineteen Eighty-Four. The largest software torrents on http://thepiratebay.se/browse/300/0/5 are chess tablebases --- used for research into the game of chess --- Microsoft Windows 7, and auto repair software called Alldata. The top "other" torrent on http://thepiratebay.se/browse/600/0/5 is a preservation copy of GeoCities, which contained the full weirdness of the late-90s web in miniature, and after a duplicate of the same torrent, #2 is an archive of chemical journals.

I think what you're probably thinking of is not library shelves but library checkouts. And I think that you'll find that library checkouts did consist 90% of trashy paperbacks, the text equivalent of pop movies, back when people got that kind of stuff from libraries instead of online.


Would you judge the contents of a library by the size of the books?


...yes? I mean, how else would you do it?


I think you don't understand what you're saying.

If you were to judge the contents of a library by the size of the largest items on the shelves (exactly as you have done with torrents), you would come away with the mistaken impression that they consisted primarily of dictionaries and boxed sets of language learning CDs. In fact, these items represent a very small portion of the items in the catalog.


I agree that it's a crude measure, but I don't think the situation is quite as bad as you make out; your intuition about paper libraries is misleading you.

What's being counted as a single item here is not a single bound volume of a chemistry journal, nor the entire archive of Bioconjugate Chemistry, but rather the entire chemistry-journals wing of the library: 539 gibibytes, including 226 different journals. By comparison, the latest five items on http://webcache.googleusercontent.com/search?q=cache:http://... are 3.7MiB, 11.7MiB, 350MiB, 730MiB, and 260MiB; the chemistry-journals library is some 2000 times the size of the median of these and roughly 150 000 times the size of the smallest, which happens to be a two-volume book called "Great Moments in Mathematics".

It turns out that when you have a power-law distribution crossing five orders of magnitude, like the one that characterizes file sizes, rather than the much narrower distribution that characterizes book sizes, you actually can get a useful approximation of the makeup of the total by looking at the makeup of only the largest items. It's surely not an unbiased estimator, but it's still a useful one.
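The power-law claim can be sanity-checked with a small simulation. This is a hypothetical sketch, not real torrent data: the shape parameter and sample size are arbitrary choices meant only to illustrate a heavy-tailed distribution spanning several orders of magnitude.

```python
import random

# Draw 100,000 synthetic "file sizes" from a Pareto distribution with
# shape alpha = 1.2 (heavy-tailed), then measure what share of the
# total bytes the largest 1% of files account for.
random.seed(42)
alpha = 1.2
sizes = sorted((random.paretovariate(alpha) for _ in range(100_000)),
               reverse=True)

share = sum(sizes[:1_000]) / sum(sizes)
print(f"Top 1% of files hold {share:.0%} of total bytes")
```

Analytically, for a Pareto distribution with shape a, the top fraction p of items holds about p^(1 - 1/a) of the total; with a = 1.2 and p = 0.01 that comes to nearly half the bytes, which is why looking only at the largest items is informative here in a way it wouldn't be for the narrow distribution of book sizes.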

Feel free to invest the work to do a better approximation.


We do not agree. I am not saying it's a crude measure, I'm saying you're measuring the wrong thing. File size is the wrong thing to measure. It doesn't matter what estimate of file sizes you can come up with, because file size is the wrong thing to measure.

Unless one is loading a moving van or trying to estimate the number of shelves required to store it, one characterizes the contents of a library by the items in the catalog and their subject matter, not by the volume they consume. You don't go in and ask for "a cubic foot of books" any more than you torrent "a megabyte of music".


Libraries pay for their books/movies/music/whatever, often buying many copies due to theft, damage, and demand.


Often, the initial distributor of a torrent has purchased it as well. At least one person downloading or uploading a media file has usually paid for it. Your first point seems not to condemn pirating wholesale.

The latter is more interesting: does one need to be capable of suffering the loss of an object in order to share it? Does access have to have a bandwidth limit for sharing to be reasonable? An interesting example is ebooks at libraries. My university offers a number of these. They cannot be stolen like regular books, and they have no limit on simultaneous borrowers. Even if there were some artificially imposed limit, what purpose would it serve? Do extra restrictions on use make a product more valuable? Does it create more profit for content creators?


Often, the initial distributor of a torrent has purchased it as well.

Is this really the case? Based on my, admittedly very limited, outsider view of the scene, most distributors get their copy via review copies sent to the press, or it's an inside job by someone working for one of the companies in the production chain. Having a pirate copy up before something is available in the store is a big deal in the pirate scene, and that rather precludes buying a copy.


Of course they aren't looking for originality. It can't be expected. When you give students a really difficult test, you don't expect everyone to make an "A". You expect some to fail ("D") and many to be just average ("C"). Some select few, however, will defy the norm and manage an "A".

The situation with assigned writing is the same. Some will elegantly write many dull, boring statements and back them up with some personal anecdotes or stories they came across while doing research. Those will stand out against the poorly written dull, boring statements. They'll get higher scores.

The few that break the mold and do something completely unexpected will definitely stand out. If they can back up their originality with half decent ability, they'll stand out even more than the "standard excellence". Those people will definitely get an A.

The conclusion is sound. If you beat the expectations people have of you, good things will likely happen.


An "idea guy" who also handles marketing and sales shouldn't be considered just an "idea guy".


I completely agree, but I feel there is a bias on HN towards non-technical founders, certainly under these conditions. I think it's important to understand which traits of a non-technical co-founder are going to be more valuable / interesting to work with: I've seen friends get stuck in bad positions because the founder can't grow a business (I've been in this place myself when young).


The word-choice is really a minute issue. The more appalling one is that the "entrepreneur" potentially wins half the prize while his "resources" split the other half. This seems grossly unfair given that the entrepreneur doesn't seem to bring any value other than his/her idea to the event.


... the only difference listed between an entrepreneur and resource is that the former has an idea they want to try out. So I'd fully expect the entrepreneur to be a developer/designer/sales person who can contribute in significant ways.

