On Apr 14, 2:37 pm, "Andre P.S Duarte" wrote:
I started reading the Beginning Python book. It is intended for people
who are starting out in the Python world, but it is really
complicated: he tries to explain something, and then after a bad
explanation he gives a bad example. I really recommend NOT reading the
book, since it will make you want not to continue with Python. This is
just me letting the air out of my lungs; no need to reply, this is
just a recommendation. Thanks for the opportunity.
My experience with technical books of all types is that often you'll
find some that don't work for you at all, while they'll be great for
other people. If it is the Apress volume you're talking about, I quite
like it because it's more practical than the Learning Python book from
O'Reilly. The one I preferred most, though, was the online text of
Dive Into Python: http://diveintopython.org/.
I can see where you're
coming from though.
From: http (Paul Rubin)
Date: 15 Apr 2007 00:51:57 -0700
Subject: proposed PEP: iterator splicing
John Nagle <nagle at animats.com> writes:
> > Less clutter, and avoids yet another temp variable polluting the namespace.
> Are we in danger of running out of temp variables?
There is unfortunately no way to contain the scope of a loop index to the
inside of the loop. Therefore introducing more useless loop indexes creates
more scorekeeping work and bug attractants. Better to get rid of them.
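The splicing the proposed PEP is after can already be approximated with
itertools.chain, which concatenates iterators into one stream without
binding each element to a throwaway loop variable. A minimal sketch
(the generator names here are illustrative, not from the thread):

```python
import itertools

def inner():
    """A sub-iterator whose output we want spliced into a larger stream."""
    yield 2
    yield 3

# Splice several iterables into one stream. No temp variable is needed
# to shuttle inner()'s items along one by one.
spliced = itertools.chain([1], inner(), [4])
print(list(spliced))  # -> [1, 2, 3, 4]
```

Python later grew a language-level answer to the same itch: PEP 380's
`yield from` (Python 3.3) delegates to a sub-iterator directly inside a
generator body, again with no intermediate loop index.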
From: http (Paul Rubin)
Date: 15 Apr 2007 00:55:09 -0700
Subject: tuples, index method, Python's design
"Rhamphoryncus" <rhamph at gmail.com> writes:
Indexing cost, memory efficiency, and canonical representation: pick
two. You can't use a canonical representation (scalar values) without
some sort of costly search when indexing (O(log n) probably) or by
expanding to the worst-case size (UTF-32). Python has taken the
approach of always providing efficient indexing (O(1)), but you can
compile it with either UTF-16 (better memory efficiency) or UTF-32
I still don't get it. UTF-16 is just a data compression scheme, right?
I mean, s[17] is the 17th character of the (unicode) string regardless
of which memory byte it happens to live at? It could be that accessing
it takes more than constant time, but that's hidden by the implementation.
So where does the invariant c==s[s.index(c)] fail, assuming s contains c?
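For what it's worth, on a UTF-16 ("narrow") build of the era, the invariant
failed exactly for characters outside the Basic Multilingual Plane: such a
character occupies two surrogate code units, indexing counts code units, and
s[s.index(c)] hands back a lone surrogate rather than c. In modern CPython
(PEP 393's flexible string representation, Python 3.3+) indexing counts code
points, so the invariant holds even for astral characters. A small sketch,
which also shows the memory tradeoff the quoted post describes:

```python
s = "a\N{MUSICAL SYMBOL G CLEF}b"   # the clef is U+1D11E, outside the BMP
c = "\N{MUSICAL SYMBOL G CLEF}"

# Indexing counts code points, not storage units, so the invariant holds.
assert c == s[s.index(c)]

# The tradeoff: UTF-16 needs a surrogate pair for the clef, while
# UTF-32 spends four bytes on every code point, common or rare.
print(len(s))                      # 3 code points
print(len(s.encode("utf-16-le")))  # 8 bytes (2 + 4 + 2)
print(len(s.encode("utf-32-le")))  # 12 bytes (4 * 3)
```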