One thing that's been key for me is namedtuple (in the collections module). It's immutable like a tuple, but the values can be accessed by name just as if they were object attributes built with the class keyword. It's great for creating generic functions (think Lisp and CLOS) instead of using Python's prototypical system. And since tuples can contain any objects and functions are objects, you can bind callables like lambdas to those names, which allows you to have 'methods' if you want as well.
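A sketch of what I mean (the Point shape and the norm field are my own toy example, not anything standard):

```python
from collections import namedtuple

# A toy immutable record with a callable stored in one of its fields.
Point = namedtuple('Point', ['x', 'y', 'norm'])

p = Point(x=3, y=4, norm=lambda self: (self.x ** 2 + self.y ** 2) ** 0.5)
print(p.x)        # 3
print(p.norm(p))  # 5.0 -- there's no implicit self, so pass the tuple explicitly
```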
That said, the standard library leaves something to be desired for the functional style. It makes sense for object-oriented programming to have non-mutating functions return the result and mutating functions return None (think sorted vs. sort), but this can get rather irritating when you're trying to write in a purely functional paradigm.
Also, am I forgetting my Python, or does it not have a great solution for appending to the front of a list? One thing I love about Lisp is that it's ridiculously easy to write (cons item some-list), or even (cons item1 (cons item2 some-list)). Doing the same thing in Python is irritating, because insert() doesn't return the result list.
...or maybe that's what I deserve for trying to write Lisp-like code in Python, anyway....!
EDIT: As noted in the comments, the '+' operator will suffice here. Though since lists are really arrays and not linked lists, this functional way of thinking will result in horribly inefficient CPython code.
Python lists are array-backed, Lisp lists are singly-linked lists. Appending to the front of an array list is O(n)--you don't want to do it. Appending to the back is constant time, but is an in-place operation and doesn't return anything, which is less than ideal for FP purposes.
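For completeness, collections.deque is the standard-library answer for cheap front appends, though it's still an in-place operation:

```python
from collections import deque

# deque supports O(1) appends at either end; list.insert(0, x) is O(n).
d = deque([2, 3])
d.appendleft(1)   # constant time, but mutates in place and returns None
print(list(d))    # [1, 2, 3]
```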
Yes. That's the whole point of the article, and it matches my understanding of SICP well: functional programming is possible only if you model changing state as streams, which are just called iterators in Python. All the fuss about Python missing some parts needed to be really functional seems to me to miss the point.
And Lisps work internally by generating lists and evaluating those, but you don't see me complaining about macros!
In all seriousness, though, if I really wanted a functional implementation, I wouldn't be using Python. It's more that it's nice to know I can create an object that's immutable and functions more or less like a CLOS object, so that I can think in that mindset instead. Also, I believe it doesn't come with all of the hidden machinery (like a per-instance __dict__) that a regular class would.
In the end, if I really cared, I'd actually write a wrapper around the type() function.
See also http://bugs.python.org/issue3974, where someone submits a patch for namedtuple that doesn't use exec. The maintainers reject it as less clear and probably slower. I agree there is "nothing unholy in using exec", though I found this weird at first too.
So it is a doubly linked list, its name is deque, and its name is pronounced "deck". The levels of indirection here sound kind of like something Lewis Carroll might write.
'deque' is an irregular abbreviation of "double-ended queue", and "deck" is a convenient metaphor for visualising the container - you can put things at either end of a deck of cards. Cf. std::deque<> in C++.
Maybe there's no way in the standard library, and it may be a valid point that it should be there, but you could trivially define your own 'cons' function and write cons(item, someList)
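A minimal sketch of such a cons (the name and implementation are my own choices, and note it's an O(n) copy, not Lisp's O(1) cons):

```python
def cons(item, lst):
    """Return a new list with item prepended, leaving lst untouched.
    Copies the whole list, so it is O(n), unlike Lisp's O(1) cons."""
    return [item] + lst

xs = [2, 3]
print(cons(1, xs))           # [1, 2, 3]
print(cons(0, cons(1, xs)))  # [0, 1, 2, 3]
print(xs)                    # [2, 3] -- the original is unchanged
```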
Hmm, I forgot that '+' was overloaded for lists in Python - nevermind that last statement, then.
The rest still holds, though - not all functions have both a mutating and a non-mutating counterpart (like sort), and map() returning a full list makes chained operations rather unwieldy. I think they fixed this in Python 3, though I'm not sure if it was backported to 2.7.
There's in-place .sort() and the functional sorted(), which has been there since at least 2.6. There's no in-place map, but it would be a one-line function (and you could use map as an in-place operator if you use a function for its side-effects). In the case of + there's the in-place counterpart += (same as extend for lists).
So I don't really get your concern, plus mutability is a property of data-structures, not of functions.
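For what it's worth, the in-place map really is a one-liner (map_inplace is my own made-up name):

```python
def map_inplace(f, lst):
    # Slice assignment mutates lst in place, mirroring list.sort()'s style.
    lst[:] = [f(x) for x in lst]

xs = [1, 2, 3]
map_inplace(lambda x: x * x, xs)
print(xs)  # [1, 4, 9]
```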
It's not a matter of in-place mapping, but if I remember correctly, map() returns a lazy iterator in Python 3, which has the effect of simulating single traversal and lazy evaluation like Haskell does.
> mutability is a property of data-structures, not of functions
True, but in a (purely) functional language, if all data structures are immutable and functions are simply mappings of input values to output values with no side-effects, that's inconsequential. We're talking about simulating a functional paradigm within a non-functional language with mutable data structures, so writing functions without side-effects would be more idiomatic - hence the problems with trying to write Python in a functional style.
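The Python 3 map() laziness mentioned earlier is easy to observe directly (assuming a Python 3 interpreter):

```python
def double(x):
    print("evaluating", x)
    return x * 2

m = map(double, range(3))  # Python 3: nothing is evaluated yet
first = next(m)            # only now does "evaluating 0" print
print(first)               # 0
```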
If Python were a non-functional language, as you claim, it wouldn't have map, reduce, and functools. Python is not a purely functional language; it is a multi-paradigm language. That difference is the main point here.
No, that would go against the philosophy of Python: only one [obvious] way to do it. Lambda, map, and reduce were all components that Guido very much wanted to remove, as he felt they had no place in Python.
Python is an object-oriented language, and the fact that it is a more purely object-oriented language than (for example) Java allows one to use functions as first-class objects in ways that appear consistent with a functional paradigm, but that doesn't mean the language supports a functional approach. That would require support by the interpreter for some key features, such as tail-call elimination and lazy evaluation.
Put another way, what's the difference between the following?
> [f(x) for x in xrange(10)]
> map(f, range(10))
Without knowing the underlying implementation, you can't really be sure - it's possible that map() is just a syntactic alternative to a list comprehension, and you could write an implementation of Python that did just that. If we're talking about CPython, though (which we are), then we know that the latter retrieves the function object once and calls it multiple times, while the former looks the name up ten times. In the latter case, we have a function object that is called with each integer in the specified range, and these calls are what, in idiomatic Python, produce the values that are then used to create the list. Also, xrange produces its values lazily via a special sequence object, whereas range creates the full list up front, so the two are handled differently.
That isn't necessarily so inconsistent with a functional paradigm, but my point is that the implementation matters more than the function names - simply having map() and reduce() doesn't make Python a multiparadigm language. Python evaluates eagerly and does not optimize tail recursion, two things that make it hard to argue that Python supports a truly functional style.
As noted by another commenter, this is not an accident; while you can fake functional paradigms syntactically with Python's extensive implementation of first-class objects (including functions), the explicit goal is to discourage use of recursion, as well as other functional ways of thinking, in favor of iteration, and other imperative/OO ways of thinking.
as has been pointed out, python isn't particularly great at hardcore functional programming due to its lack of native persistent data structures. however, i've found it great for learning functional programming without having to get used to the syntax of real functional languages.
here are some different ways to implement functional sequence operations without native python syntax like `yield`:
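One such sketch (my own, not from the original comment): map and filter built purely from functools.reduce, with no yield anywhere. The my_ names are made up for illustration.

```python
from functools import reduce

# Sequence operations expressed as a single fold over the input.
def my_map(f, seq):
    return reduce(lambda acc, x: acc + [f(x)], seq, [])

def my_filter(pred, seq):
    return reduce(lambda acc, x: acc + [x] if pred(x) else acc, seq, [])

print(my_map(lambda x: x * x, [1, 2, 3]))         # [1, 4, 9]
print(my_filter(lambda x: x % 2 == 0, range(6)))  # [0, 2, 4]
```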
I was at a Python user group meeting with Guido a few years back, and I asked him about his plans for functional programming support in Python. He responded that he thought functional programming was a lot less useful than all the hype would suggest, and that, far from planning more support for functional programming, he regretted having put in as much support as he already had. Ideally, he said, he would remove things like lambda instead of making them stronger.
Even things like tail call elimination, he said, would just encourage people to use more recursion, when iteration was easier to read, and in places where TCE would work, the programmer could obviously convert it to iteration himself and should do so for code clarity.
I like Python, and I like a lot of things about functional programming, so it wasn't the answer I was hoping for. Unless Guido has had a change of heart since then, I wouldn't pin my hopes on Python ever becoming a first-rate functional programming language. It's first rate at the things Guido values.
Python has added list and set comprehension syntax, which are just as functional as e.g. map and filter. So I don't think it is functional constructs in general he is against, but rather specifically the Lispy idioms - map, filter, etc. with functions/lambdas as arguments.
Decorators are another example of functional programming added to Python, but with a pythonic syntax.
Python supports imperative, OO, and functional idioms, but generally prefers only one of them for a specific task. For example, loops are preferred to recursive functions where appropriate. Objects are preferred to closures for mutable state, so mutation of captured variables was long unsupported (support has since been added with nonlocal, but it has clearly not been a priority).
Python is interesting because it is multiparadigm while trying to avoid fragmentation where you can solve the same problem in totally different styles.
I wish he would add TCE for the sole reason that parsing tree-like structures feels much more natural with recursion than with a for loop. I am always paranoid that a really deep structure will cause a stack-depth-exceeded exception.
There's a quote from Guido about private attribute access where he said, "We're all consenting adults here," and that leaving things open is beneficial.
Recursion is another tool that he should let us adults use if it solves a problem better, and TCE is a lovely feature to have. Frankly, if you're using recursion, either you just learned it in a CompSci class or you know what you're doing. Either way, daddy Guido shouldn't tell us no :).
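Without TCE, the usual workaround for deep structures is an explicit stack; here's a sketch (the (value, children) tuple shape is my own toy representation):

```python
def preorder(node):
    """Iterative preorder walk; node is a (value, children) tuple.
    Uses an explicit stack, so it handles trees far deeper than the
    default CPython recursion limit."""
    stack, out = [node], []
    while stack:
        value, children = stack.pop()
        out.append(value)
        stack.extend(reversed(children))  # reversed keeps left-to-right order
    return out

tree = (1, [(2, []), (3, [(4, [])])])
print(preorder(tree))  # [1, 2, 3, 4]
```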
I don't know. It seems to me, from reading SICP, that functional programming doesn't mean lambdas or map/reduce; it means pure functions as the main unit of programming, instead of objects or instructions or declarations. Then it is possible to write functional code in Python without lambdas or map, using streams (iterators) as the state-keeping data structure.
The problem, as always in these discussions, is that no-one wants to define the term "functional programming". Many people will be quite willing, though, to jump on each other for saying things that they believe flow from an incorrect definition of the term.
Anyway, at the very least there's two camps. On the one side, you have the hardcore Scheme folks whose conception of functional programming is "if I can't figure out how to write it with tail recursion it must not be computable". On the other side you have the Haskell folks whose conception of functional programming is "a monad is just a monoid in the category of endofunctors, what's the problem?"
Somewhere in between are people who actually get things done.
Same thing happened with OO, where people have been discussing whether C++ or JavaScript or Python or Smalltalk are "real OO", rather than discussing if the patterns they support are useful or not.
I don't remember where I read it, but: "watching someone try to do functional programming in python can be painful". I have to agree quite a bit with that. The distinction between lambdas and full function definitions, between statements and expressions, and so on keeps getting in the way.
I started doing a lot of programming in CoffeeScript a few months ago, and between the JS underpinnings and the CS syntax you can do things that would just be painful in Python.
What do you mean with lack of persistent data-structures? There are immutable data-structures such as numbers, strings, tuples, and frozensets (unfortunately no frozen dicts) -- seems plenty to me. Or do you mean the technical meaning of persistent data-structures, as in one that gives access to all previous versions?
A good set of persistent data structures would give you all the generality, utility, and performance of Python data structures like lists and dicts and none of the mutability.
A tuple, behind the scenes, is just a type tag, a refcount, a length, and an array of PyObject pointers. A list, instead of having a fixed array of PyObject pointers, has a pointer to a dynamically resizable array of them. Tuples and lists are both iterable, they both have O(1) indexing and fairly efficient internal representations, but the key difference is mutability. A tuple is supposed to be a composite data type; a list is a dynamic array that you store things in.
Aside from mutability, though, tuples can do pretty much anything lists can, and usually a bit more efficiently.
Immutable lists can be prepended to or popped from in O(1) time, without affecting previous versions.
Immutable dicts can have stuff added and removed in O(log n) time, without affecting previous versions.
Scala's immutable vectors support appending, prepending, concatenation, splitting, insertion, and deletion in O(log n) time, without affecting previous versions. It's log base 32, so for any realistic integer index range it never goes above ~7 levels - basically constant.
Basically, you get (almost) the performance of mutable data structures, except you don't mess up the old versions of the thing every time you change something. It's really pretty incredible, the performance characteristics people manage to get with structural sharing: http://www.scala-lang.org/docu/files/collections-api/collect...
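The structural-sharing idea is easy to sketch in Python itself with nothing but tuples (a toy persistent cons list, not a standard-library structure):

```python
# Each cell is (head, tail); None is the empty list.
def cons(head, tail):
    return (head, tail)

def to_pylist(cell):
    out = []
    while cell is not None:
        out.append(cell[0])
        cell = cell[1]
    return out

base = cons(2, cons(3, None))
v1 = cons(1, base)   # O(1) prepend; shares base's cells
v2 = cons(0, base)   # also shares base; v1 is untouched

print(to_pylist(v1))  # [1, 2, 3]
print(to_pylist(v2))  # [0, 2, 3]
```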
There's still a lookup hit of log32 n versus O(1) for a contiguous-memory immutable vector. Locality can also be an issue with tree-like structures, though it's muted by the large branch factor. On the flip side, Scala gives you the kitchen sink for persistence.
This is a useful reference but not really all that current. It dates back to 2006 - the functional constructs in Python now have evolved and changed over the years.
I doubt you'll see that - at least in CPython. Guido's been very clear about how he feels about implementing functional paradigms in Python, so it will probably never support functional programming in the standard implementation - just a bit in syntax/interface.
I honestly don't understand that. Data structures are just code ... isn't it much more work to change syntax and interfaces instead of adding some decent implementations of data structures to the library?
The syntax changes happened very much against Guido's will - he actively tried to remove lambda (and I believe map as well) for quite a while.
Anyone can write whatever Python libraries they want, but I doubt anything of the sort would be added to the standard library, and certainly nothing that would require an FP-aware interpreter to optimize in a useful way.
“… a persistent data structure is a data structure that always preserves the previous version of itself when it is modified. Such data structures are effectively immutable, as their operations do not (visibly) update the structure in-place, but instead always yield a new updated structure.”
There are situations where balanced search trees are appropriate - a general-purpose priority queue (with delete and decrease-key operations), for example. There's actually code for this in the _documentation_ of the heapq module, which seems really odd to me - why not just include it? It's also a shame that heapq is built on list instead of being a first-class data structure; it feels bolted on. Bitwise tries would be nice as well.
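For reference, the recipe in the heapq docs is roughly along these lines (a sketch from memory, with lazy deletion standing in for true delete/decrease-key):

```python
import heapq
import itertools

REMOVED = '<removed>'          # sentinel marking a deleted entry
counter = itertools.count()    # tie-breaker so equal priorities never compare tasks

heap, entries = [], {}

def push(task, priority):
    """Add a task, or update its priority (decrease-key via re-insert)."""
    if task in entries:
        remove(task)
    entry = [priority, next(counter), task]
    entries[task] = entry
    heapq.heappush(heap, entry)

def remove(task):
    entries.pop(task)[-1] = REMOVED   # lazy delete: mark, don't re-heapify

def pop():
    while heap:
        _, _, task = heapq.heappop(heap)
        if task is not REMOVED:
            del entries[task]
            return task
    raise KeyError('pop from empty priority queue')

push('b', 2); push('a', 1); push('c', 3)
remove('b')
print(pop())  # 'a'
print(pop())  # 'c'
```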
Even if heaps were a first-class data structure, wouldn't the choice between trees and lists still exist in the underlying implementation? Ultimately you would have to allow for both or accept the strengths and weaknesses of whichever implementation was chosen.
As far as I know, array-based heaps are the most efficient for practical workloads (they avoid the overhead and non-locality of the pointers a tree structure needs). I don't think it's necessary to offer multiple implementations, just as there is only a single well-tuned hash table in Python. What matters is that the functionality of the abstract data type is offered.