An Introduction to Type Level Programming in Haskell (rebeccaskinner.net)
184 points by rebeccaskinner on Sept 9, 2021 | 79 comments


I am not a Haskell programmer, but reading about monads, it seems much ado about nothing. It seems that one does not need to understand category theory to understand monads. Basically, I would say, in very basic language, that a monad is a thing that swallows values so you cannot get them back. You can have one function that produces a value that is swallowed by the monad. Then you can compose these kinds of functions one after another and have all values swallowed by the monad. The use is, e.g., to do output (the IO monad) or error handling (the Maybe monad). If you could get your values back from the output, the output could be used to store state, and functional programmers hate that, so the output is a monad and cannot be used to get the state back.


I had a dozen people try to describe Monads to me, many of them Haskell programmers.

The one that finally made it click, was from a very senior Haskell dev who said:

  "People overcomplicate things. "Functor" is a fancy word for an interface that has the .map() method, and then "Monad" is a fancy word for a interface that has ".map" and ".flatMap" both"
Or something like that, I think. This probably isn't totally right, but IIRC it's something like this:

  interface Functor<A> {
    map: <B>(
      transform: (a: A) => B,
      a: A,
    ) => B[]
  }


  interface Monad<A> extends Functor<A> {
    flatMap: <B>(
      transform: (a: A) => B,
      a: A,
    ) => B
  }
Which helps me to think about it and understand it. None of the other explanations made any sense.


The explanation is kind of right but the code isn't.

The map function needs to return a Functor<B> and flatMap takes a function A => Monad<B> and returns a Monad<B>.

That's actually not 100% correct either as those are in fact higher kinds, so it's more like you have a generic F<_> that can be a functor or a monad or whatever, and the signature for functor map actually returns a F<B>, and flatMap takes a A => F<B>, returning a F<B>.
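
In Haskell the same interfaces can be written out directly. A minimal sketch (primed names to avoid clashing with the Prelude, and ignoring the Applicative class that sits between Functor and Monad in modern GHC):

    class MyFunctor f where
      fmap' :: (a -> b) -> f a -> f b

    class MyFunctor m => MyMonad m where
      return' :: a -> m a
      bind    :: m a -> (a -> m b) -> m b  -- this is flatMap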

It's quite easy to grok looking at Scala.


As the sibling says, your code's a little off; but the approach of "they're just an interface" is the way I think of them too.

We shouldn't forget the associated "functor laws" and "monad laws", but they're pretty intuitive and not much different from more mainstream rules; e.g. a 'Serialize' interface might have a "law" like 'Serialize.load(Serialize.save(x)) == x'.
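
For a concrete taste, here is a minimal, runnable sketch spot-checking the functor laws on lists (ordinary value-level checks, not proofs):

    -- Functor laws: fmap id == id, and fmap (g . f) == fmap g . fmap f
    lawsHold :: Bool
    lawsHold =
      fmap id [1, 2, 3] == id [1, 2, 3]
        && fmap ((+ 1) . (* 2)) [1, 2, 3] == (fmap (+ 1) . fmap (* 2)) [1, 2, 3]

    main :: IO ()
    main = print lawsHold  -- True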


It depends on what you mean by "swallows values". If I have an IO<T>, I can very easily get at the T that's inside by flatMapping over it. It's true that there's no way to turn an IO<T> straight into a T -- whatever you do with the T must always get wrapped back up in IO at the end of the day -- but it's not gone, and indeed we do use monads to store state. (There's literally a monad called `State`, actually.)

What monads don't let you get at -- at least, not without more knowledge about which monad you're using -- is whatever other information is stored alongside the Ts they contain. If I've got a random M<T>, and I don't know that M is Option, I can only operate over all the Ts it contains (either zero or one of them), but I can't tell whether it has any Ts in the first place.
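
For instance, here is a minimal sketch of both points, assuming the State type from the mtl/transformers package:

    import Control.Monad.State  -- from the mtl package

    -- The Int "inside" isn't gone: runState hands back both result and state.
    tick :: State Int Int
    tick = do
      n <- get       -- reach the value with monadic binding
      put (n + 1)    -- store new state
      pure n

    -- With an unknown monad m, all we can do is act on the values it contains:
    bump :: Functor m => m Int -> m Int
    bump = fmap (+ 1)

    main :: IO ()
    main = print (runState tick 41)  -- (41,42)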


Monads are a unification of a bunch of computer stuff, including sequencing, IO, non-determinism, state, concurrency and exceptions. When I say "unification", I mean it in the sense that Newton's theory of gravity unified the motion of the planets with a falling apple (previously they were considered separate phenomena), or how Maxwell's equations unified electricity and magnetism. Once you grok monads you realise that all those different things I listed are just special cases of one overarching theory.
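
To make that concrete, here is a small sketch: one piece of monadic code that yields non-determinism, failure handling, or IO sequencing depending on which monad it runs in.

    -- The same definition covers several "phenomena" at once.
    pairs :: Monad m => m a -> m (a, a)
    pairs mx = do
      x <- mx
      y <- mx
      pure (x, y)

    main :: IO ()
    main = do
      print (pairs [1, 2])     -- non-determinism: [(1,1),(1,2),(2,1),(2,2)]
      print (pairs (Just 3))   -- failure handling: Just (3,3)
      pairs getLine >>= print  -- sequencing: reads two lines, prints the pair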


The word itself comes from Category Theory. That's why a comprehensive understanding requires it, since the implementation of the concept in programming derives from the definition in CT.

The difficulty of understanding the concept is compounded because both the implementation and the mathematical object are called "monad", but they are technically different. The implementation can be simplified, as many do, but that still doesn't explain the actual math object's definition. The implementation is not as abstract as the math object, which means it is also not as powerful, so it becomes a limited understanding by definition.

From my perspective, CT is the math of abstract composition, and is foundational mathematics from which other maths can be derived. This highly abstract nature makes it hard to "simplify" further with an explanation. Monad is already, by definition, as simple as possible, and it's confusing that its implementations in programming get the same name (although I'm not sure naming it differently would help either).


The best explanation of functional concepts is (rather unsurprisingly) done using JavaScript. See Functional Programming Jargon: https://github.com/hemanth/functional-programming-jargon#mon...

Purists say that it's not correct (monadic laws and all that), but it's a vastly more approachable explanation than literally any monad tutorial.


My few-word explanation: a monad is a wrapper for effectful, impure functions. It swallows the effect (not the value) and assures that a defined value type is available for further computation.
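
A one-line illustration of that, sketched in Haskell:

    import Data.Char (toUpper)

    -- getLine's input effect is captured in IO, but the String type is
    -- still guaranteed and available for further (pure) computation:
    shout :: IO String
    shout = fmap (map toUpper) getLine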


Back in the day, I had this infatuation with the idea of having everything checked at compile time by using the type system. And I see many people following this kind of path with the type systems of Haskell, Rust, and maybe TypeScript. Granted, it feels good, at first.

I don't do that any more, simply because I'm very lazy and also because, in a lot of cases, those types that I wrote will be replaced by a more dynamic representation (e.g. strings) at some point.


> I don't do that any more, simply because I'm very lazy

I'm lazy too, but that's exactly why I use static types. So that when I refactor code, I can let the type checker tell me all the places that need to be updated instead of trying to piece that together from test failures (and praying that the tests didn't miss anything).


Yeah, but my point is that the checking system you created will be thrown away at some point, in my experience. So why bother writing types? That is my attitude, at least at the very early stage of a project.

Later in a project, where you know for sure that something can be known at compile time, of course I love to check it at compile time.


My experience is that all my code gets thrown away or rewritten at some point. I've seen what code that doesn't get rewritten looks like; I'm intent on never inflicting anything like that on the world.

As far as types go, I've found that types actually help more during prototyping than later on. (At least for code I've written myself; types are great for helping me navigate other people's long-term codebases!)

Types give me a way to sketch out and iterate on an overall design without needing to implement anywhere near all the logic I'd need; then, when I start in on the logic, it naturally lays down on the skeleton the types provide. Compared to my experience with Python—where I still use some types!—I've found it easier to start and quickly try different designs in Haskell.
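
For example, here is a sketch of what that types-first iteration looks like (all names are hypothetical):

    -- Stub the bodies with `undefined` and let the compiler check the shape
    -- of the design before any logic exists.
    data Theme = Theme { themeName :: String, themeColors :: [String] }

    loadTheme :: FilePath -> IO Theme
    loadTheme = undefined  -- logic comes later; the design type-checks now

    renderTheme :: Theme -> String
    renderTheme = undefined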


>My experience is that all my code gets thrown away or rewritten at some point. I've seen what code that doesn't get rewritten looks like; I'm intent on never inflicting anything like that on the world.

I believe most code is old. You mostly notice code that changes.


Good point. Yes, during the prototype phase, rigid typing actually helps quite a bit. I still do that myself, simply because I don't have to "run" the program to see if I'm on the right path when I'm just sketching things out.

I guess what I wanted to say is that, in the long run, compile-time checks may become runtime checks. Take enum values: at the beginning, enums are a fine choice, but soon you will find yourself having to store them in a DB, etc.

The problem is, moving from a compile-time check to a runtime check isn't that straightforward in a lot of cases.
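
For example, a sketch with hypothetical names: the enum round-trips through the DB as a string, and the compile-time guarantee degrades into a runtime parse that can fail.

    data Tier = Startup | Pro | Enterprise deriving (Show, Read, Eq)

    toDb :: Tier -> String
    toDb = show

    -- The check that used to be free at compile time is now a runtime one:
    fromDb :: String -> Either String Tier
    fromDb s = case reads s of
      [(t, "")] -> Right t
      _         -> Left ("unknown tier: " ++ s)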


> so why bother writing types?

Types are the cheapest semantic documentation you can write, and your compiler/type checker can provide additional guarantees based on them.

Not only that, they're notes to future contributors/yourself about how a program works, so they/you don't have to reverse engineer code that was written a while ago in order to modify it with confidence.


> Types are the cheapest semantic documentation you can write

They are not.

Unfortunately, the Haskell world especially seems to think they replace documentation; that's why so much of the "documentation" for a lot of Haskell libs is just a dump of types with no explanation of what they mean, how they interact, what the functions using them do, or how they can be used.

> they're notes to future contributors/yourself about how a program works

They are not. Types do not describe how a program works.

> so they/you don't have to reverse engineer code that was written a while ago in order to modify it with confidence.

Yes, you will have to reverse engineer code that was written a while ago, because types only describe, well, types. Your code contains logic, and logic is the hardest part to understand.

Personal anecdote: I worked on a system that was transitioning from the original ad-hoc implementation to a better designed one. Some functions would accept a Person. Others would accept a Contact. Why? How do you convert between the two? What are the differences? What was behind the decision? Why did `is_empty(new_contact())` return `false`? And so on.

Thank god it had types, right? No need to reverse engineer.


Does your argument extend to automated tests? If not, why not?

I see types as mere tests. More robust, yet more limited in scope. FWIW it's much easier to see them that way once you introduce dependent types into the discussion, but it applies to the simplest type systems just as well.


Types are tests. On top of being unit tests that are forcibly run, they are formal and apply everywhere a datum is used.


Sort of. I don't write tests at early stages, do you?

I don't see types as serious tests, and I don't think they are robust. Let's say that some integer must be between 10 and 100, do you use type checks for this?


> I don't write tests at early stages, do you?

I definitely do. In my experience, it's much easier to adopt a discipline of testing (and static typing) early on than it is to try to retroactively add that to an existing system, which may or may not be written in a way that is even testable.

But I do appreciate that viewpoints can differ on this topic. Regarding types, I studied type theory academically, so types are natural to me and don't really add any extra cognitive work (and perhaps they eliminate some). So I might as well use and benefit from them if they basically cost me nothing. But for someone who thinks of static typing as just trying to make the compiler happy (perhaps because they don't really understand the type system or because the type system is not ergonomic), I can see why they might have a more pessimistic view of it.


> let's say that some integer must be between 10 and 100, do you use type checks for this?

Yep. In particular, I would use:

- A "wrapper" type around an unsigned byte (we don't need negatives, or a whole machine word)

- A "newtype" feature, to replace the wrapper with a Byte after type-checking (Haskell calls this "newtype"; Scala calls this "opaque type aliases").

- A private/unexported/scoped constructor, to prevent arbitrary Byte values getting wrapped

- A "smart constructor" which checks the bounds of a given Byte, returning a 'Maybe MyBoundedIntType' or some other type-checked error mechanism (Scala's 'Try[MyBoundedIntType]' works well).

- Polymorphism/overloading, to call that smart constructor from various numeric types (char, int, long, signed, unsigned, etc.)

In Scala that would look something like:

    import scala.util.{Try, Success, Failure}

    opaque type MyBoundedIntType = Char

    object MyBoundedIntType {
      def apply(c: Char): Try[MyBoundedIntType] =
        if (c >= 10 && c <= 100)
          Success(c)
        else
          Failure(new IllegalArgumentException(s"Value ${c.toInt} outside range [10, 100]"))

      // .toChar truncates silently rather than failing, so bounds-check first
      def apply(i: Int): Try[MyBoundedIntType] =
        if (i >= 10 && i <= 100) MyBoundedIntType(i.toChar)
        else Failure(new IllegalArgumentException(s"Value $i outside range [10, 100]"))

      def apply(l: Long): Try[MyBoundedIntType] =
        if (l >= 10 && l <= 100) MyBoundedIntType(l.toChar)
        else Failure(new IllegalArgumentException(s"Value $l outside range [10, 100]"))
    }
In Haskell:

    module MyModule (MyBoundedIntType(), toByte, MakeBounded(..)) where

    import Data.Word (Word8)

    newtype MyBoundedIntType = MBIT { toByte :: Word8 }

    class MakeBounded t where
      mkBounded :: t -> Either String MyBoundedIntType

    instance MakeBounded Word8 where
      mkBounded b | b >= 10 && b <= 100 = Right (MBIT b)
      mkBounded b | otherwise           = Left ("Value " ++ show b ++ " not in range [10, 100]")

    instance MakeBounded Int where
      mkBounded i | i >= 10 && i <= 100 = mkBounded (fromIntegral i :: Word8)
      mkBounded i | otherwise           = Left ("Value " ++ show i ++ " not in range [10, 100]")

    instance MakeBounded Integer where
      mkBounded i | i >= 10 && i <= 100 = mkBounded (fromIntegral i :: Word8)
      mkBounded i | otherwise           = Left ("Value " ++ show i ++ " not in range [10, 100]")


You can with Ada. Though not many other languages.


You can do it in most languages by just using a wrapper type with fallible constructors.

A lot more awkward than dependent types, but no popular language has those.


> let's say that some integer must be between 10 and 100, do you use type checks for this?

Yes. I explicitly mentioned dependent types for this reason.

This is also expressible in a more limited fashion in a language like TypeScript, although some may argue that it too employs a form of dependent typing.


If you have types, it is easy to refactor the code. So the throwing-away bit is actually easier!


Throwing away is easy, but remember: you have to do a runtime check instead of a compile-time check, and that is not so easy in a lot of cases.


Laziness as the motivation for creating machines/typecheckers to do the boring work.


Laziness is actually a great feature of Haskell.


SPJ begs to differ; he thinks it was a mistake.


I'm still in the 'feels good' stage with meticulously modelling the types of data I use in Angular with Typescript.

Initially, I couldn't really see the point, but if anything I find it helps with catching errors and checking response expectations at build time, and also with auto-completion/pseudo-documentation/hinting in VSCode.

Now, I love it.


> in a lot of cases, those types that I wrote will be replaced by a more dynamic representation (e.g. strings) at some point.

I would love to see a type system that lets me subtype string, restrict the domain, and automatically (with some hints from the programmer) get functions on my type based on string functions the compiler can prove would result in my type.

I've only seen type systems that work like this on numbers, and usually only a very few integers at that.
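
The usual approximation, short of that, is a newtype plus a smart constructor, with each borrowed string function re-validated by hand; a minimal Haskell sketch (Email and mkEmail are hypothetical names):

    import Data.Char (toLower)

    newtype Email = Email String

    mkEmail :: String -> Maybe Email
    mkEmail s
      | '@' `elem` s = Just (Email s)
      | otherwise    = Nothing

    -- "Inheriting" a String function means re-proving domain membership:
    emailToLower :: Email -> Maybe Email
    emailToLower (Email s) = mkEmail (map toLower s)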


The formal notion of subtype seems to be getting some solid research attention; see e.g. https://drops.dagstuhl.de/opus/volltexte/2021/13888/pdf/LIPI...

Hopefully we see these ideas make their way into functional languages soon.


That laziness isn't so smart when it comes time to write all the tests to check there won't be runtime exceptions due to treating values as the wrong type.


In TypeScript you can define your discriminated union with strings, they call it literal types:

  type Foo = "bar" | "baz" | "qux"


> Posted on August 25, 2021 by Rebecca Skinner

> This blog post is a long-form article based on a talk I delivered at the haskell.love conference on 10 Sept 2021

Today, as far as I can tell, is Sept 9, 2021. Hello traveller from the future. Is time travel possible with Haskell?



You have forgotten about time zones. It's 10 Sep here, at about 8 am when you wrote that comment. Not sure "where" that conference is, though.


I'd guess that the article was written after the talk was prepared but before it was delivered, and then it was updated after the talk was delivered.


This is pretty much true, except the talk is actually in about 13 hours from now. I wanted to have the article ready as a companion piece as soon as the talk was done and opted to post it a couple of days early so that I wouldn't need to worry about it and could enjoy the rest of the conference.


Rebecca, I enjoyed the article. I probably should've provided a less trite comment, but I figured this was the obvious one. Sorry for forcing you to chime in on this, of all things.

Good luck on the talk and enjoy the conf.


> Today, as far as I can tell, is Sept 9, 2021.

Depends on the country.


This is extraordinarily complex and admittedly contrived. In the end, it doesn’t do anything to help you pick good colors for your theme.

It seems like support for live edits, so you can edit the theme with immediate feedback, is far more important than this sort of compile-time checking?


> It seems like support for live edits, so you can edit the theme with immediate feedback, is far more important than this sort of compile-time checking?

Maybe. But this talk wasn't about the best way to build a colour picker, I assume, but just used that as an example for showing some programming techniques.


Yes, I get that, but it would be nice to have a practical, non-contrived example, if one can be found.


The article gives the following motivating example

    data AliceBlue = AliceBlue

    instance IsColor AliceBlue where
      toRGB = const $ RGB 0xF0 0xF8 0xFF
My question is: can't we do away with the IsColor class and write:

    aliceBlue = RGB 0xF0 0xF8 0xFF
I suppose we could imagine writing things that aren't RGB values, like CMY? Then my question is: how can we decide between writing a conversion layer (e.g. no type class, and past some boundary we deal with RGB values only, and the caller is expected to convert it) vs using the typeclass everywhere. I would tend to opt for the former but I'm curious to hear about the tradeoffs.


Well, in that section she explicitly states that this class allows for abstraction over the color encoding (RGB vs. CMYK, etc.).


I understand that, but at some point someone will call toRGB. It seems like for any function that calls toRGB, we have a choice of using the existential type for this e.g.

    someFunc :: SomeColor -> ...
    someFunc (SomeColor color) = ... (toRGB color) ...
vs having these functions accept an RGB instead of a SomeColor e.g.

    someFunc :: RGB -> ...
    someFunc color = ... color ...
Then of course the caller would have to do the conversion

    someFunc (CMYToRGB color)


It's likely that in this simple example the practical differences may be nonexistent. But if you think about the types represented and what they mean, RGB is not a color, but an encoding of a color. So if some function is supposed to take a color and RGB is used as a proxy for a color, you may very well be able to skip the abstraction but it would need to be refactored if you ever evolve into using the concept of a color and its presentation form in different ways.

The class method seems to also allow CMYK and RGB to be defined completely independently of one another for any given color type. Whether that's necessary for the use case is beyond my understanding, but it allows that.

I'd also imagine that giving colors their own types means that in the domain there's a small set of colors that are important, so a type class around this emphasizes the behaviors of these important colors rather than modeling a color system in general.

Let's say one day RGB is superseded... while you can convert RGB values to any new value, you end up with all these functions that want to take a color, but are taking an RGB value instead.

I do think that, depending on the problem domain, your method of treating colors as directly encoded values is valid, but maybe more for a robust color-specific model than for a color-scheme application.

I think a good analogy would be how to model a generic currency system, rather than the pricing plans of a SaaS app.

The SaaS app will have some

  data StartupTier = StartupTier
  data ProTier = ProTier
  data EnterpriseTier = EnterpriseTier

  class IsPriceTier price where...

  instance IsPriceTier StartupTier where
    toUSD = const $ USD 5
Here it's not so much the currency system that we focus on as some discrete prices that we care about, which just so happen to convert to currencies. It also gives us the benefit of having tighter control over the different prices in different currencies, rather than using a conversion function.

Also, that toRGB function has signature a -> RGB, so it allows for a value to perhaps dynamically calculate what RGB value it becomes, such as creating an instance of IsColor for something like "FavoriteColor", which could be pulled from the database.


One of the reasons we might want to pick an existential type is that it lets us (or the users of our library) compose a theme using a number of different colors of different types.

In your example, you have two different calls to `someFunc` that are each eventually converting two different colors, so you might say something like:

  let
    somethingRed = someFunc (RGB 0xff 0x0 0x0)
    somethingBlack = someFunc (cmykToRGB $ CMYK 0x00 0x00 0x00 0xff)
  in (somethingRed, somethingBlack)
In this sort of situation there's not a strong difference between the two. There are a few other situations that, if we expect to encounter them, tip us in favor of considering existentials though. The big one from an API design standpoint is that your approach makes it a bit more work to allow the user to mix-and-match colors from different encodings.

If you think about the `ThemeInstance` in the article, it's a `Map String SomeColor`. If you wanted to eliminate the existential type you'd probably end up with something like:

  newtype ThemeInstance theme encoding = ThemeInstance
    { getThemeInstance :: Map String encoding }
So you can have a `ThemeInstance RGB` or a `ThemeInstance CYMK`, but the user isn't allowed to freely mix-and-match.

Of course, you could use an ADT rather than an existential type here:

    data SomeColor = SomeRGBColor RGB | SomeCMYKColor CMYK
    myThemeInstance = ThemeInstance $ fromList 
      [ ("red", SomeRGBColor (RGB 255 0 0))
      , ("black", SomeCMYKColor (CMYK 0 0 0 255)) ]
Now you can case match against the constructors in `SomeColor` if you want:

  myFunc someColor = case someColor of
    SomeRGBColor rgb -> someFunc rgb
    SomeCMYKColor cmyk -> someFunc (cmykToRGB cmyk)
But if this is library code, you've now created a closed encoding, meaning that a user who wants to work with YUV or HSL or some esoteric encoding can't do it.

At the end of the day, it is a design decision. If you opt not to go with existentials and typeclasses you'll end up with a different sort of API with different benefits and limitations.


Ah, that all makes sense. Thanks for the answer, and great article!


I suspect the answer is related to the Expression Problem.

https://en.wikipedia.org/wiki/Expression_problem


The best non-conventional introduction: https://aphyr.com/posts/342-typing-the-technical-interview


> “Can I use any language?”

> “Sure.”

> Move quickly, before he realizes his mistake.

This part always gets me.


> Criss is still here, at least physically.

Got me.


This uses the older/classic Haskell 98 features to encode the types.

It works, but with modern haskell, one can use techniques that are less verbose.


Seems to me that Lisp weenies have been outsmugged by Haskell weenies.


Read the next few ones, Lisp comes back strong.


Thanks for mentioning this. Both stories are lovely, some of the most enjoyable and intellectually satisfying writing I've read this month.

On that note, I keep dreaming that one day, someone will make a Lisp with all the code generation goodies and Haskell-grade type system. That would be my dream language to code in.

(As it is right now, I mostly do C++17 in overtly-type-safe style at work, and typed Common Lisp on the side...)



That may be the greatest thing I've ever read.


Despite the current https certificate issue, this is also a great introduction to Haskell.

http://learnyouahaskell.com/chapters


+1 for the link, but I don't think type level programming should be an introduction to Haskell at all!


So, you're programming with types. Do you also write tests with types? ;)

In before "types check stuff". If you code some convoluted logic with types (and even non-convoluted logic, too), you still need to test that you've coded that logic correctly.


I once coded red-black trees in Haskell at the type level, and yes, I had to write some type-level tests.

In a real dependently-typed language the friction would be lower, because it's easier to put "normal code" at the type level.

Perhaps I could have skipped some type-level tests by introducing more kind-safety...


People do this actually. There's even best-in-class-level support for this in Haskell and Scala already:

https://hackage.haskell.org/package/should-not-typecheck

https://www.scalatest.org/scaladoc/3.2.0/org/scalatest/match...


These two check that something doesn't type check. They don't really check the logic you've coded in your types (and which will type check).

For example, a real-world situation from Turkey (looking at my check from Marks and Spencer): goods from category 1 (clothes) have a VAT of 8%, goods from category 2 (plastic bags) have a VAT of 18%; if a person buys 2 items they get a 15% discount, if they buy 3 items they get a 25% discount, and there are additional discounts on various goods.

This will type check even if you code it as "if there is 1 item, do a 40% discount, and assign VAT of 2%".


You can test for things type checking just fine without special stuff. In Haskell

    type Test1 = TypeExpressionImTesting ~ ExpectedType
Thus, you can also easily check type-level computation as well.

A concrete example

    type AddTest = 2 + 2 ~ 4
^ This is valid Haskell and will confirm the addition function does what you expect


Once again, that's not what I was asking or talking about.

If you write 2 + 2 ~ 5, all types will be correct (Int), but the function will be incorrect.

So the question is: now you've coded significantly more difficult logic than adding two Ints together with types. How will you test that logic?


If you write 2 + 2 ~ 5, your build will fail because your assertion failed.

You can test more complicated logic the same way, using ~ as a type-level "shouldBe" that allows you to write type-level unit tests for said complicated logic.


> If you write 2 + 2 ~ 5, your build will fail because your assertion failed.

So ~ tests not only the expected type, but the expected return value, too?


Okay, I'll answer my own question: ~ is type equality.

"A type context can include equality constraints of the form t1 ~ t2, which denote that the types t1 and t2 need to be the same." [1]

So. My question remains. What's to stop me from coding invalid logic in my types? How do I test the logic?

BTW, can't verify that "you can also easily check type-level computation as well." The example fails here https://replit.com/languages/haskell

  {-# LANGUAGE DataKinds #-}
  {-# LANGUAGE TypeOperators #-}

  type AddTest = 2 + 2 ~ 4

  main.hs:4:18: error: Not in scope: type constructor or class `+'
    |
  4 | type AddTest = 2 + 2 ~ 4
    |                   ^
  exit status 1
[1] https://downloads.haskell.org/~ghc/7.6.3/docs/html/users_gui...


> My question remains. What's to stop me from coding invalid logic in my types? How do I test the logic?

You test the logic the same way people have tested term-level logic for years. You say "this call to my function with these args results in this output."

At the term-level, you typically use equality to implement assert.

~ gives you a type-level "assertEqual" with which to write the exact same sort of unit tests you can write at the term level.

--

For the examples, you need to import GHC.TypeLits. It has the type family "+"


> You test the logic the same way people have tested term-level logic for years. You say "this call to my function with these args results in this output."

What is the value of adding this to types then? Besides increasing compilation time?

> For the examples, you need

  {-# LANGUAGE DataKinds #-}
  {-# LANGUAGE TypeOperators #-}
  import GHC.TypeLits;
  
  type AddTest = 2 + 2 ~ 4

  main.hs:5:20: error:
    * Expected kind `Nat', but `2 ~ 4' has kind `Constraint'
    * In the second argument of `(+)', namely `2 ~ 4'
      In the type `2 + 2 ~ 4'
      In the type declaration for `AddTest'
    |
  5 | type AddTest = 2 + 2 ~ 4
    |                    ^^^^^
  exit status 1
So far "easily check type-level computation as well" fails to be easy.


Okay, I finally made it compile. Let's revisit:

> whateveracct: A concrete example

> whateveracct: type AddTest = 2 + 2 ~ 4

> dmitriid: If you write 2 + 2 ~ 5, all types will be correct (Int), but the function will be incorrect.

> whateveracct: If you write 2 + 2 ~ 5, your build will fail because your assertion failed.

Just pasted this into https://replit.com/languages/haskell, and of course it passed

  {-# LANGUAGE DataKinds #-}
  {-# LANGUAGE TypeOperators, ConstraintKinds, TypeFamilies #-}
  import GHC.TypeLits;
  
  type AddTest = (2 + 2) ~ 5

  main = putStrLn "Hello, World!"


You haven't tested the type you wrote. `type AddTest = (2 + 2) ~ 5` compiles without error the same way `addTest = (2 + 2) == 5` would run without error.

To test it, try, for example

    {-# LANGUAGE DataKinds #-}
    {-# LANGUAGE TypeOperators, ConstraintKinds, TypeFamilies #-}
    {-# LANGUAGE RankNTypes #-}
    import GHC.TypeLits
     
    test :: ((2 + 2) ~ 5 => a) -> a
    test x = x
    
    main = putStrLn "Hello, World!"
That will fail to compile. If you change the 5 to 4 then it will compile.


Thank you!

This leaves only the question of cost vs benefits of these approaches :)


maybe a

  type Test (c :: Constraint) = c => ()
is in order here ;)

  testTheObvious :: Test ((2 + 2) ~ 5)
  testTheObvious = ()


That compiles though. You need to CPS the constraint as in my example above. This does the job:

    type Test (c :: Constraint) = forall a. (c => a) -> a
    
    testTheObvious :: Test ((2 + 2) ~ 5)
    testTheObvious = id



