
But I nevertheless would argue that this

   (item for item in iterable if predicate(item))
is a vastly superior idiom and that the principles behind dynamically-typed languages, which make this idiom possible, are sound.

   filter(_P, []) -> [];
   filter(P, [H|T]) -> filter(P(H), H, P, T).

   filter(true, H, P, T) -> [H|filter(P, T)];
   filter(false, H, P, T) -> filter(P, T).
Look, ma, no IFs


>is a vastly superior idiom

What makes it superior? Just that it appears more generic?

It actually isn't. If it's passed an object for iterable that isn't actually iterable, it will break. At runtime.

If the iterable contains an item that's not compatible with the test the predicate makes, it will fail. At run time.
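
A rough Python sketch of both failure modes (the predicate and values are made up for illustration):

    pred = lambda s: s.startswith("a")

    # Passing a non-iterable: TypeError, but only at the moment this line runs.
    # gen = (item for item in 42 if pred(item))

    # Passing items the predicate can't handle: fails only when the result is consumed.
    gen = (item for item in ["abc", 7] if pred(item))   # no error yet
    # list(gen)   # AttributeError at run time: 7 has no .startswith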

The only reason it looks more generic is that it does LESS.


It not "appears", it is generic. As generic as it could be. This particular line is also a generator comprehension which produces a lazy sequence.

Run-time errors and so-called null-propagation are well-known issues, and once code has passed unit tests it is in principle no less trustworthy than statically-typed code. On the first run, yes, there is a chance of a run-time error.

The dynamic languages, notably Lisps, Erlang, Python and Ruby, are proved to be superior for quick prototyping and so-called exploratory programming, which has been popularized by pg, along with bottom-up design and the layered-DSLs architecture, in the OnLisp book.

Another classic example is Norvig's Design Patterns in Dynamic Languages which basically ridiculed the whole thing.

There are distinct cultures around MIT Scheme and Common Lisp, Smalltalk and now Python which emphasize expressiveness, minimalism and readability. An erudite like you should know this.


>It not "appears", it is generic. As generic as it could be

Only in the sense that accepting arguments it shouldn't accept, and will crash on, is part of the general description of the filtering operation -- which it is not.

If you prefer, it's "more generic" than it should be. It's not a generic description of the "filter" operation, but a generic description of the "process_input_and_produce_output" operation.

>The dynamic languages, notably Lisps, Erlang, Python and Ruby are proved to be superior for quick prototyping and so-called exploratory programming

Where is that proof published? And what methodology did it follow? Since, you know, we are computer SCIENTISTS and all...


> are proved to be superior for quick prototyping and so-called exploratory programming

Not OP, and I don't have proof, but I always reach for Python for exploratory programming or a quick "let's try and see what happens". Not sure if types are part of it or not; maybe it is just terseness -- the code is closer to pseudo-code.

I hope one day to learn Rust well enough to internalize the type and borrowing system so that I can crank things out just as fast (and they'd be more reliable and faster out of the door), but I am not there yet.

> Only in the sense that accepting arguments it shouldn't accept, and will crash on

Now on that point I think it is not just types. Types are a part of it. C has types, and systems written in it crash and segfault all the time. Likewise C++ and so on. Maybe Rust is one newcomer where types and lifetimes would make a difference in practice.

However, since OP mentioned Erlang, I'd say it is possible to write more reliable systems in Erlang (Elixir as well, perhaps) than in C++ or Java or other such typed systems. I have seen it work in practice, and Ericsson's customers have seen systems work for years with 99.9999999% reliability. Now, Erlang has optional types (the more annotations you add, the more benefit you get from them), but in practice isolated heaps, built-in distribution, solid error logging and reporting, and a sane concurrency model make a lot more difference.


Back when I was good at Haskell, I would actually use it for exploratory stuff. At the time I think it mostly came down to which standard library I knew better. Python has a huge stdlib, but I actually find prototyping difficult, as I don't have my head in the game in terms of which function on X I can use for this purpose, or what the syntax for that is again. Python has quite a lot of syntax / language-specific things to know when you think about it. And I've always found the docs hard to decipher.

When prototyping I think I actually want:

1. Little language-specific knowledge you have to learn and remember when you know a lot of other languages. A lot of this is about API consistency. You should be able to recognise a pattern in how the API is organised, and just go from there. Python still doesn't do list.sort(), it's still sorted(list). WHY???? This shit gets in the way every time.

2. Really REALLY good documentation. This is incredibly important. I should go from googling what I want to do or looking up a method to a concise description and an officially maintained usage example in under 10 seconds. Rust is nailing it most of the time here.

3. Types. Trust me to say that, but autocomplete is one hell of a drug, and so is using it to break out of the write-compile-test loop WAY earlier, usually during the write phase.

4. Not necessarily a big standard library, but at least a very good, frictionless package/dependencies system. Rust is getting better at this, the community is even thinking about 'blessed' packages.


> doesn't do list.sort(), it's still sorted(list). WHY????

Because sorted() is a non-destructive sort - it returns a new list.
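
For the record, Python lists also have an in-place sort; a quick sketch of the difference:

    nums = [3, 1, 2]

    fresh = sorted(nums)   # returns a new sorted list; nums is left untouched
    nums.sort()            # sorts nums in place and returns None
    sorted("bca")          # sorted() accepts any iterable and returns ['a', 'b', 'c']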


> but autocomplete is one hell of a drug,

Ah good point. I do remember that when doing C++ and Java.


> what methodology did it follow?

    http://erlang.org/download/armstrong_thesis_2003.pdf
    http://norvig.com/java-lisp.html
    http://arclanguage.org/
    http://old.ycombinator.com/viaweb/
Yes, 'proved' was too strong a claim. A few exceptionally good systems have been produced, notably Symbolics CL, Smalltalk, Erlang and Clojure, which, after removing all the hype and snowflakery, is a remarkable thing.


No, in principle is exactly where it's a lot less trustworthy than the statically-typed one. Because in principle you could execute some code (untested; data-driven; "is_testing?function(x){return true;}:1.5"; etc.) that passes a non-function instead of a function, and this code would be considered valid (until it goes pop at runtime). But a language that checked this property statically would give an error before execution even started.


> But a language that checked this property statically would give an error before execution even started.

Like C or C++ does.

But you are right - I shouldn't use 'in principle'. It was rather a decorative idiom here, because it is true only for simple functions like map or filter or whatever you take from a standard prelude.

What I mean is that we are not talking about run-time in this context. Yes, one could pass by reference (or even by value) some crap at run-time, but this is quite a different problem. Machine code has this problem too.


>Like C or C++ does.

C is more dynamic than typed. Types are very loosely enforced there and carry little information with them.


This is more a comparison between Python and functional programming than dynamic vs static. This is partly because the language you compared it to is Erlang, which is a dynamically typed language with no compile-time type checking. But also because you can do this superior idiom in statically typed languages if they offer it, and you also get the static type checking.

In statically typed C# with LINQ, you can have, generic over T:

    IEnumerable<T> iterable = ...;
    var filtered = from item in iterable where Predicate(item) select item;
This does not depend on the static typing. It's just syntax.

One non-obvious difference is that the type checker knows that 'filtered' must be an IEnumerable<T>, and you can't accidentally treat it as an IQueryable<T> or an IList<T>, which have subtly different behaviour. You can't, for instance, attempt to sort an IEnumerable without first getting a List out of it with ToList(). This is good, because enumerables can be lazy, and lists can't.

Python checks the enumerable/list distinction, but only at runtime. So let's say your Python library offered a method returning a list, and a new version of the library did some filtering before it returned the list.

A test for that functionality might pass independent of the distinction, because what test case attempts to sort a list for no reason? Some other part of your code used to work on the old version, but now breaks, and nobody knows until they run their program or write a really comprehensive test suite.
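
A hedged sketch of that scenario (the library and function names are hypothetical):

    # Hypothetical library, version 1: returns a plain list.
    def get_items_v1():
        return [3, 1, 2]

    # Version 2: adds some filtering and now returns a lazy generator instead.
    def get_items_v2():
        return (x for x in [3, 1, 2] if x > 0)

    items = get_items_v1()
    items.sort()            # fine: lists have an in-place sort

    items = get_items_v2()
    # items.sort()          # AttributeError at runtime: generators have no .sort()
    items = sorted(items)   # works against both versions: sorted() takes any iterable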

In comparison, in C#, this problem literally never arises, for one of two reasons:

1. The API initially offered only an IEnumerable<T> and consumers were calling .ToList() before sorting anyway (which is free on an IEnumerable that's actually an IList); or

2. The library author catches the error when it doesn't compile, because you can't implicitly downcast IEnumerable to IList.


Why are you ignoring the fact that one has to type so much less for the same result? ;)

Filter is the canonical example of a truly generic function. Suppose I wish to filter out something from a stream or a port. Then all I have to do is supply a generic predicate, which matches only what it is supposed to be a predicate of and ignores everything else. This is what a predicate is - a matcher for some category.

So, quickly writing something like

   (filter this? (filter (lambda (x) ...) ...))
without even thinking about type signatures is what dynamic languages and type-tagging are all about - quick prototyping.
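
A rough Python rendering of that style (the predicates and data are invented):

    # Predicates that match a 'category' of value and ignore everything else.
    def error_line_p(x):
        return isinstance(x, str) and x.startswith("ERROR")

    def long_p(x):
        return isinstance(x, str) and len(x) > 10

    stream = ["ERROR disk full", 42, None, "ok", "ERROR timeout on node 3"]
    hits = filter(long_p, filter(error_line_p, stream))   # lazy in Python 3
    print(list(hits))   # ['ERROR disk full', 'ERROR timeout on node 3']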

This is related to the embedding-of-DSLs technique and to producing systems which are layered DSLs instead of the common big ball of mud in Java.

Shall I cite from OnLisp or arc.arc and news.arc? ;)


> Why are you ignoring the fact that you have to type so much less for the same result?

Mostly because in your Python example you're really just _using_ an inbuilt filter function (comprehensions) where in Erlang you're implementing one. If you compare usage with usage, it's shorter in Erlang. More importantly, though, it's because I agree with /u/coldtea that one line of statically typed code (like the C# example) does more for you than the equivalent dynamically typed code. It's _not_ the same result.

> a matcher for some category

Well, a type is a category, which is why the study of them is called category theory. Having them helps you write predicates, and helps you avoid trying to match something of an unrecognised category. The type system does the job of 'ignore everything else'. If you're duck-typing and you actually want to handle that job (which most programs don't, they just accept they might get errors), you have to go

    filter(x => x.some_method && x.some_method(99), list).
Try stringing that together into (filter this? (filter that? list)).

My point is, you can totally have that power and expressiveness without forgoing a static type system. LINQ does it; that's enough of an example on its own. Any language worth its salt these days can string together .map(f).filter(pred).reduce((a,b) => a + b) calls and still let you hover over 'b' in your IDE and tell you exactly what it is. You don't have to choose between these things.

As for writing all this 'without even thinking about type signatures': I want to think about type signatures when working with complex data. Having the type system actually lets me be more productive; it is not a hindrance. IDEs are a part of that, but so is expecting a huge block of code to work first time.

I replied because your comparison purported to demonstrate that only dynamic languages could be so expressive. It was a bad comparison, and also not a useful example of what code that begs for expressiveness looks like. In fact, most static languages allow you to be productive and expressive when you need it, and let you lock down the use of certain code to strict requirements when you want to.

I imagine there is not a thing in the world I could do to stop you from citing OnLisp.


> does more for you than the equivalent dynamically typed code. It's _not_ the same result.

Yes. It does more checking at the cost of imposing restrictions, such as forcing homogeneous containers and conditional branches, and of less generic, more cluttered code. This is not merely hand-waving, BTW. One just has to take a look at some decent Lisp code, such as arc.arc, some good parts of Common Lisp, or Norvig's code from AIMA, which is simply wonderful.

As for typing, old-school ADTs are OK for me (that is, constructors, selectors and predicates explicitly defined as procedures). This requires some discipline, because the type system does not do anything for you, but all of this is trivial, including writing pattern-matching.
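
A tiny Python sketch of that old-school style (the 'interval' ADT and its names are invented):

    # An ADT defined purely by procedures, with a type tag instead of a type system.
    def make_interval(lo, hi):          # constructor
        return ('interval', lo, hi)

    def interval_p(x):                  # predicate: checks the tag
        return isinstance(x, tuple) and len(x) == 3 and x[0] == 'interval'

    def interval_lo(x):                 # selectors
        return x[1]

    def interval_hi(x):
        return x[2]

    iv = make_interval(1, 5)
    assert interval_p(iv) and interval_lo(iv) == 1 and interval_hi(iv) == 5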

I could argue that strong typing via type-tagging of values (the principle that values have a type, not variables) is good enough as long as it comes together with other Lisp features, such as everything being an expression and everything being a first-class value, which gives one such beauties as the Numerical Tower, but this is quite another topic.

I am also OK with the SML-family languages, love Haskell for its clarity and conciseness, and have nothing against them, but... I still think that there are prototyping languages and implementation languages, and I still prefer to prototype in a dynamically-typed language, and would still use Common Lisp or Clojure if I had the choice.


Well, typing less is not necessarily always the most important thing - though it certainly appears to be useful when writing small examples for pedagogical purposes.



