# WEEKLY 19 - Fibonacci Primes
Once again, I turned to my functional friend Haskell.
This program can calculate up to the 11th Fibonacci prime fairly quickly, but it appears to stall after that. Might just need to wait longer, or perhaps it's running out of resources?
Anyway, here it is! Enjoy!
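For readers who can't see the attached repl, here's a minimal sketch of what such a program could look like (an assumption on my part, not the original code):

```haskell
-- Sketch of a Fibonacci-primes program (hypothetical reconstruction).

-- The infinite list of Fibonacci numbers, defined lazily.
fibs :: [Integer]
fibs = 1 : 1 : zipWith (+) fibs (tail fibs)

-- Naive trial-division primality test. For large Fibonacci numbers
-- this becomes very slow, which could explain the stall.
isPrime :: Integer -> Bool
isPrime n
  | n < 2     = False
  | otherwise = all (\d -> n `mod` d /= 0) [2 .. isqrt n]
  where
    isqrt m = floor (sqrt (fromIntegral m :: Double))

-- The Fibonacci numbers that are also prime.
fibPrimes :: [Integer]
fibPrimes = filter isPrime fibs

main :: IO ()
main = print (take 11 fibPrimes)
```

With naive trial division like this, primality-testing the larger Fibonacci numbers quickly becomes the bottleneck, which would be consistent with the slowdown after the first eleven or so.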
@DynamicSquid It's pure. Type classes. User-defined operators. It's lazy. (And countless other things I won't list.)
In more detail:
If you run a Haskell function with the same arguments twice, it is guaranteed to return the same result. That's because there is no mutable state in Haskell: being a purely functional language, it has no mutable variables, so nothing outside a function's arguments can change between one call and the next. This lets the compiler perform aggressive optimizations (e.g., if a function is called twice with the same arguments, compute the value once and reuse it everywhere it's needed), which is nice performance-wise.
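As a tiny illustration (my own example, not from the original post):

```haskell
-- Pure: `sqInt 3` is always 9, no matter when or how often it's called,
-- so the compiler is free to compute it once and share the result.
sqInt :: Int -> Int
sqInt x = x * x

main :: IO ()
main = print (sqInt 3 + sqInt 3)   -- both calls can be shared
```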
"Wait!", you say, "What about a function like `putStrLn`? You can't optimize away two calls to `putStrLn` even when they have the same argument, since then the text would only be printed once!"
Well, you're right. The thing is, two calls to `putStrLn` never produce the same value. There's this thing called the IO monad, which I won't go into detail about, but basically `putStrLn` doesn't just take a `String` and return nothing; it returns an `IO ()` value describing the action of printing. The important thing about IO is that, for purity purposes, no two IO values are considered equal (think of IO as a wrapper that takes two things which might normally be considered the same, such as printing `"Hi!"` twice, and makes them two "different" things). This isn't really what the IO monad is (it's much more involved), so this mental model might lead to misunderstandings later if you really want to dig deep into Haskell. The same goes for things like `readLn`, except there nothing is passed in at all; the value you read comes back wrapped in IO. Monads are an extremely important concept in Haskell, since they let you do things that would normally be impure while staying inside a pure language.
So, you have the builtin `Int` type, which you know supports addition, subtraction, etc. You also have the builtin `Integer`, `Double`, etc., all of which support those operations. Add in any custom user-defined types (say, `Cat` or whatever), and you have a steaming mess of types. The thing is, all these types have something in common (you can use the same set of operations on them), so how can we organize them better? Well, we can use a type class. I won't go into them in detail here (you can look them up if you want), but basically a type class lets us put all those numeric types into a class, `Num`, and then we can define some generic function `square` that takes anything with the properties of `Num`.
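The function in question would be something along these lines (a sketch; the original snippet isn't shown here):

```haskell
-- Works on any type in the Num class: Int, Integer, Double, etc.
square :: Num a => a -> a
square x = x * x
```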
Type classes give you safety: in many languages, writing the above function and then giving it something of type `String` would cause a runtime error, but type classes allow Haskell to catch the problem at compile time.
This fits in nicely with type classes. You can define an infix operator, let's call it `$$`, quite easily in Haskell:
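For example (an arbitrary definition of my own; the original post's isn't shown):

```haskell
-- The parentheses around $$ mark it as an operator being named.
($$) :: Num a => a -> a -> a
x $$ y = 2 * x + y
```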
Notice the parentheses around the `$$`, which tell Haskell we're using some symbols as a function (operator) name. What's cool about this is that Haskell treats functions and operators almost exactly the same.
Consider the following Haskell code:
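A minimal version (the original snippet may have differed slightly):

```haskell
main :: IO ()
main = print (take 5 [1..])   -- prints [1,2,3,4,5]
```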
This takes the first five numbers of an infinite list, starting at 1 and counting up by 1, and prints them to the console. In most languages, constructing an infinite list would be impossible. But in Haskell it's possible because of laziness: nothing is evaluated until it needs to be. Since we only ever take 5 elements, nothing beyond the 5th term of the list actually gets evaluated, so we don't melt our CPU trying to find values in an infinite list. This allows us to do fun things (the below is taken from this repl):
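A reconstruction of that snippet, based on the description that follows (the repl itself isn't embedded here):

```haskell
-- An infinite list of Fibonacci numbers, built lazily:
-- start with [1, 1], then append each element added to its successor.
fibs :: [Int]
fibs = [1, 1] ++ zipWith (+) fibs (tail fibs)

main :: IO ()
main = print (take 10 fibs)   -- prints [1,1,2,3,5,8,13,21,34,55]
```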
Here, we say that `fibs` is a list of `Int`s. It starts with `[1, 1]`, and then we append each item of `fibs` added to everything except the first item of `fibs`. Now clearly, this creates a list of unbounded size, but that's OK since we don't actually demand any of the elements yet. We'd only have issues if we tried to get the last element of the list or its length; since we don't do that in this program, it simply calculates as much of the list as needed. Laziness is kinda hard to wrap your head around at first (or at least it was for me), but it's super powerful and lets you write really nice, clean code.
I could go on and on, but I'll spare you :D
@HarperframeInc Not sure how this relates to my post (or even really to me in general... I don't really like C++, and to me there are a lot of languages better than C++), but yes, I'm moderately fluent in Rust. I like it a lot, and it's usually pretty fast. However, the notion that "Rust is faster than C++" is not always true; it depends on the problem you're solving. I think Rust is good enough if you don't need to squeeze every last bit of performance out of a program, but if you do, then you'll probably want to go with something like C.
@fuzzyastrocat I'm starting to not like many compiled languages. C++'s standard library could use some improving, and it also just has too many features. Classes, templates, and namespaces in C would be nice. Java is just so hard to work with (don't even get me started on that), and I haven't really tried any other language.
That's why I'm also learning other languages like Haskell to try and expand my view to other ideas. Also might make a compiled language similar to Night sometime soon...
@DynamicSquid Haskell is compiled, so it's probably my favorite compiled language outside of Eros and Curta (both are languages of my design).
By the way, I agree with you on all points there. C++ has potential, it just wasn't done quite right. And yeah, I wish C would turn into C++--, since it just needs a little bit more (all of which wouldn't increase runtime at all, only compile-time). And Java is... a mess. That's why people made things like Scala and Kotlin :D