In this post, I would like to demonstrate how type signatures in Haskell can help us understand functions and function compositions.

As a Haskell learner and lover, I find the most enjoyable part of programming in Haskell is that whenever I write a function, I can always start with a type signature. For example, imagine I want to write a factory function which receives a binary function as input, and outputs a function that swaps the order of that function's inputs. The description above is a little ambiguous, but if I write down the type signature, you can easily understand what I mean:

flip :: (a -> b -> c) -> (b -> a -> c)

As you can see, the flip function receives a binary function (a -> b -> c) as input, and outputs another binary function: (b -> a -> c). The implementation becomes clear just by reading the type signature!
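For reference, this is exactly how flip is defined in the Prelude, followed by a small GHCi example of my own:

flip :: (a -> b -> c) -> (b -> a -> c)
flip f x y = f y x

-- ghci> (-) 10 2
-- 8
-- ghci> flip (-) 10 2
-- -8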

A more appealing example

Here is another, more appealing example: suppose you have a function that does nothing at all; how could you build new, nontrivial functions based on this trivial function?

The function I mentioned is the famous id function:

id :: a -> a

where id stands for identity. You can find this concept in almost every algebraic system: the identity element in group theory, the identity morphism in category theory, the identity matrix in linear algebra, and so on.

The id function looks trivial at first glance: it just does nothing! One may think that any function composed with id will simply be the original function itself, so that compositions involving id can never yield nontrivial new functions.
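That intuition reflects the identity laws of function composition, which are easy to check (the example is mine):

addOne :: Int -> Int
addOne = (+ 1)

-- identity laws: id . f = f = f . id
leftId, rightId :: Bool
leftId  = (id . addOne) 41 == addOne 41   -- True
rightId = (addOne . id) 41 == addOne 41   -- True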

However, this is not true. The trick is that the id function actually provides one nontrivial thing: it binds the input type and the output type to be the same type! They are both of type a. This makes a great difference.

The interesting part is that, using this type binding, you can build brand-new functions with genuinely different behavior!

For example, using the flip function above, we can build a new function with id:

nflip = flip id

What does this function do? Let's first check its type signature:

nflip :: b -> (b -> c) -> c

Wow, what happened? We get quite an unexpected type signature. According to it, what the nflip function does is: given an input of type b and a function of type b -> c, return a value of type c. This is in fact an apply function (with the value first and the function second):

apply :: b -> (b -> c) -> c
apply x f = f x

That is, apply the function f (the second input argument) to the input x (the first input argument), and return the resulting value.
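As it happens, this function already ships with base: Data.Function exports the reverse-application operator (&), which has exactly this type. A quick comparison (examples mine):

import Data.Function ((&))

-- (&) :: a -> (a -> b) -> b
-- ghci> 5 & (* 2)
-- 10
-- ghci> flip id 5 (* 2)
-- 10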

So how could we derive such a type signature? As we have mentioned, the id function binds its input and output to the same type. Since flip receives a function as its input argument, the evaluation of flip id must satisfy the type bindings of both flip and id:

id :: a -> a
flip :: (a -> b -> c) -> (b -> a -> c)

-- since (->) is right-associative (currying), this is the same as:
flip :: (a -> (b -> c)) -> (b -> a -> c)

-- in order to use id as an input to flip, we must have:
id :: a -> (b -> c)

-- unifying a -> a with a -> (b -> c) gives a = b -> c !!!

flip id :: b -> a -> c
-- a should be replaced with b -> c:
flip id :: b -> (b -> c) -> c
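You can also let GHCi do the unification for you:

-- ghci> :t flip id
-- flip id :: b -> (b -> c) -> c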

We inferred all of the above using only the type signatures! That is where the power of the type system lies.

A more complex example

Let's discuss a more complex type inference example, a real-world one. The inference comes from a famous Haskell package called hmatrix, a Haskell binding to BLAS/LAPACK with an interface to the common linear algebra functions used in most numerical applications. What we will talk about are two functions (operators) in its source code:

infixr 1 #
(#) :: TransArray c => c -> (b -> IO r) -> Trans c b -> IO r
a # b = apply a b

(#!) :: (TransArray c, TransArray c1) => c1 -> c -> Trans c1 (Trans c (IO r)) -> IO r
a #! b = a # b # id
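Before unpacking the types, note how the body of (#!) parses. Since # is declared right-associative:

a #! b = a # b # id
-- is parsed as
a #! b = a # (b # id)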

The infixr declaration indicates that # is an infix operator that associates to the right; the 1 is its precedence level (precedences in Haskell range from 0 to 9, low to high). The apply function is a method of the TransArray typeclass; for Matrix it is implemented by amat:

amat :: Storable a => Matrix a -> (f -> IO r) -> (CInt -> CInt -> CInt -> CInt -> Ptr a -> f) -> IO r
amat x f g = unsafeWith (xdat x) (f . g r c sr sc)
  where
    r  = fi (rows x)
    c  = fi (cols x)
    sr = fi (xRow x)
    sc = fi (xCol x)

instance Storable t => TransArray (Matrix t)
  where
    type TransRaw (Matrix t) b = CInt -> CInt -> Ptr t -> b
    type Trans (Matrix t) b    = CInt -> CInt -> CInt -> CInt -> Ptr t -> b
    apply = amat
    applyRaw = amatr

The implementation looks complex and full of details, so let's focus mainly on the type signatures, and discuss why the #! operator should have a signature like this:

(#!) :: (TransArray c, TransArray c1) => c1 -> c -> Trans c1 (Trans c (IO r)) -> IO r

given the type of (#) and the definition of (#!):

(#) :: TransArray c => c -> (b -> IO r) -> Trans c b -> IO r
a #! b = a # b # id

Aha, the id function again! So we just need to follow the derivation steps from the previous section and apply them to this more complex situation. It turns out to be not that complex after all.

-- since we have
id :: a -> a
-- and id is the second input argument of (#) in (b # id),
-- its type must unify with (b -> IO r), so:
id :: IO r -> IO r

-- therefore b # id has the type
b # id :: Trans c (IO r) -> IO r

-- suppose a has type c1; then a # (b # id) will be
-- (the association order comes from the (infixr 1 #) declaration):
a # b # id :: Trans c1 (Trans c (IO r)) -> IO r

-- that's it! the operator (#!) should have the signature
(#!) :: (TransArray c, TransArray c1) => c1 -> c -> Trans c1 (Trans c (IO r)) -> IO r
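If you want to play with this chaining pattern without pulling in hmatrix, here is a self-contained toy analogue of my own: Trans c b is simplified to a plain function type c -> b, but the continuation-chaining trick with id is identical:

-- toy (#): the continuation g receives the value a directly,
-- instead of hmatrix's marshalled representation Trans c b
infixr 1 #
(#) :: c -> (b -> IO r) -> (c -> b) -> IO r
(a # f) g = f (g a)

-- same id trick as hmatrix's (#!): the final continuation
-- ends up taking both values
(#!) :: c1 -> c -> (c1 -> c -> IO r) -> IO r
a #! b = a # b # id

main :: IO ()
main = (3 #! 4) (\x y -> print (x + y))  -- prints 7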

As you have seen, inferring type signatures makes it much easier to understand the behavior of complex function compositions!

Reference

[1] https://github.com/haskell-numerics/hmatrix

[2] https://github.com/haskell-numerics/hmatrix/blob/master/packages/base/src/Internal/Matrix.hs
