
I think the answer re tables is the meta-answer for Linux generally: copy somebody else who has solved the problem. The Apple iOS/OS X dichotomy comes to mind as something to ape.


I fall into a similar misdirected-focus trap, but mine is simpler: I waste an embarrassing amount of time editing the wrong damned file. After a sequence of small tweaks that yield no change in the results, I make a huge change, see nothing, and then realize that I've done it yet again. I need to write a vim macro that blanks the screen every few minutes, displaying the message "are you SURE this is the right file?"


Re "deprivation ... from everybody"...

I was told once that the typical paper in my field gets only two serious readers, beyond the reviewers. (The joke is that it gets only two, including three reviewers.)

Of course, it is impossible to know who has read a paper, and that may explain why I've never seen the number written down. Still, it's easy to count citations. In my field, strong papers get perhaps a dozen citations. My guess is that no more than a quarter of citations indicate a thorough reading, so we indeed get a readership that could be counted on one hand.

For an author, a big factor is page charges. A popular [society, noncommercial] journal that I use has a special deal for providing content to readers for free. It "only" costs $3K. At that rate, the one or two potential readers who lack a university subscription could just phone me, and I could buy them a ticket to visit so I could explain the work in person.


That's because that journal is doing it wrong and someone there is getting too much money. You give me $3K and I'll seed a torrent of a thousand papers, as long as my Internet is billed per month instead of per GB. I just saw some redditors link a torrent with tens of gigabytes of technical books, so the method appears to work just fine. And if the author's intent is to distribute, it wouldn't be violating anyone's copyright.


The journal owns the copyright, not the author.


Yes, exactly. I think people arguing for "freedom" here need to start listing what papers they want to read and why. Most research isn't exactly accessible even if you hand someone a copy of the paper.


I downloaded the new version and tried it on a docx file that a colleague had sent me. In MS Word, the file has 24 pages. In LibreOffice, it seems to have a bit over 5 pages. No error message, no dialog box ... it just gets to a certain string ($O_2$, expressed in LaTeX format) and stops. If not for the fact that I have MS Word, I'd email back to my colleague and ask her what the heck kind of draft manuscript she was sending.

Although I use the Excel counterpart (Calc) quite a lot (for grading), I have yet to see the Word counterpart (Writer) handle a professional document of any realistic complexity properly.

I know, I should report the bug, but the material I'm looking at is under submission to a scientific journal, and will therefore be private unless and until it is published.


Security of data is a moral concern in some circles (e.g. academics outside the USA are quite wary of the Patriot Act) and a legal one in others (e.g. privacy of medical and financial information). Some jurisdictions have specific laws about certain types of data not being hosted across a border. These things may be disagreeable to some, but they certainly are not delusions.


Mathematica provides symbolic computation; R does not.
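To be fair, base R does ship one tiny symbolic facility, D() and deriv() for differentiating simple expressions, but that's about the extent of it; anything like Mathematica-style simplification, integration, or equation solving needs an external CAS. A quick illustration:

    # Base R can differentiate a simple expression symbolically ...
    D(expression(x^2 * sin(x)), "x")
    ## 2 * x * sin(x) + x^2 * cos(x)

    # ... but there is no general simplification, integration,
    # or equation solving in base R.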


In case it's of any interest, below is code in the R language that produces time-series graphs for two provinces in Canada.

    fluPlot <- function(country="ca", regions="Nova.Scotia")
    {
        # Google Flu Trends serves one CSV per country; the first 11 lines are header text
        url <- sprintf("http://www.google.org/flutrends/intl/en_us/%s/data.txt", country)
        d <- read.csv(url, skip=11, header=TRUE)
        n <- length(regions)
        t <- as.POSIXct(d[["Date"]])
        # Take the y range across all requested regions, so no line gets clipped
        ylim <- range(unlist(d[regions]), na.rm=TRUE)
        for (i in 1:n) {
            if (i == 1) {
                plot(t, d[[regions[i]]], xlab="", ylab="flu activity",
                     ylim=ylim, type='l', col=i)
                grid()
            } else {
                lines(t, d[[regions[i]]], col=i)
            }
        }
        legend("top", col=1:n, lwd=par('lwd'), legend=regions, bg='white')
    }
    par(mar=c(2, 2, 2, 2), mgp=c(2, 0.7, 0))
    fluPlot("ca", c("Ontario", "Nova.Scotia"))


Nice! I've adapted your code and made a version that uses ggplot2. It also uses memoise() so that each data set only needs to be downloaded once per R session.

    # Memoize read.csv so each data set only needs to be downloaded once
    require(memoise)
    readcsv <- memoise(read.csv)

    fluPlot2 <- function(country="ca", regions="Nova.Scotia") {
      require(ggplot2)
      require(reshape2)

      url <- sprintf("http://www.google.org/flutrends/intl/en_us/%s/data.txt", country)
      d <- readcsv(url, skip=11, header=TRUE)
      d$Date <- as.Date(d$Date)

      # Convert to long format
      dl <- melt(d, id.vars = "Date", variable.name = "region")

      # Select regions of interest
      dlsub <- subset(dl, region %in% regions)

      ggplot(dlsub, aes(x=Date, y=value, colour=region)) + geom_line() +
        theme_bw()
    }

    fluPlot2("us", c("Minnesota", "California"))


I made a year-by-year line overlay for the Japanese data.

You can probably plug in any other area's data. The JS is just in the index.html.

http://poyo.co/d3stuff/flu/
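If you'd rather stay in R, here's a rough sketch of the same year-over-year overlay idea. The function name is made up, and it assumes the jp data file has a Japan column the way the ca file has Ontario:

    yearOverlay <- function(country="jp", region="Japan") {
        url <- sprintf("http://www.google.org/flutrends/intl/en_us/%s/data.txt", country)
        d <- read.csv(url, skip=11, header=TRUE)
        d$Date <- as.Date(d$Date)
        d$year <- format(d$Date, "%Y")                # grouping variable
        d$doy  <- as.numeric(format(d$Date, "%j"))    # day of year, 1..366
        years <- sort(unique(d$year))
        # Empty frame spanning a full year, then one line per year on top
        plot(NA, xlim=c(1, 366), ylim=range(d[[region]], na.rm=TRUE),
             xlab="day of year", ylab=region)
        for (i in seq_along(years)) {
            sel <- d$year == years[i]
            lines(d$doy[sel], d[[region]][sel], col=i)
        }
        legend("topleft", legend=years, col=seq_along(years), lwd=1, bg="white")
    }
    yearOverlay("jp", "Japan")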


Me too. And I didn't realize that made me old until scanning this thread. I guess time flies.


Not only is this a great thing (well, everything about GitHub is pretty great), but now I know that the funny icon can be called a "breadcrumb".


Fifteen years ago, people would have found this interesting.


Despite being a pythonista at heart and a rubyist at work, I still reach for Perl whenever I notice bash scripts getting messy, growing out of scope, and destined to enrich my personal long-term Unix toolset.

Yes, Perl is still very relevant today, for me and for countless others.[0]

[0]: https://github.com/languages


Out of interest, why do you go to Perl instead of Ruby?


1. It's everywhere. By that I mean my code will work everywhere, without much thought about whether it's installed, or about versions, packages, or whatnot.

2. Ruby is too 'core' and needs too many additional gizmos to be efficient.

3. Ruby is explicit and conventional, while implicitness and terseness are idiomatic in Perl.

4. I'm no Perl monk, but things in my head translate to code structure more easily in Perl for this use case (Unix tools, text/binary/file content stream manipulation).

5. Perl strict mode

6. Occasionally, Perl suid support

7. Bonus: performance under Windows, or rather lack thereof for Ruby

Mind you, Ruby is extremely apt and I enjoy working with it (especially - but not only - with Rails), but this is more a matter of using the right tool for each problem.


For me, it's CPAN and the documentation.


There's no need for snark; it adds nothing to the conversation and makes you seem like a bit of a dick.

There's a load of interesting stuff happening in Perl; I don't use the language myself, but I'm interested in what they're up to.


15 years ago you could have replied to it on /. with "F1rst P0st" and been vaguely relevant.

