- Parallel Computing Course - Stanford CS149, Fall 2023
- Performance-Aware Programming Series by Casey Muratori
- Algorithms for Modern Hardware
- Computer Systems: A Programmer's Perspective, 3/E - by Randal E. Bryant and David R. O'Hallaron, Carnegie Mellon University
- Performance Engineering of Software Systems - an MIT OCW course
- Parallel Programming 2020 by NHR@FAU
- CPU Caches and Why You Care - by Scott Meyers
- [Optimizing a ring buffer for throughput](https://rig
WARNING: Article moved to a separate repo to allow user contributions: https://github.com/raysan5/custom_game_engines
A couple of weeks ago I played (and finished) A Plague Tale, a game by Asobo Studio. I was really captivated by it, not only by the beautiful graphics but also by the story and the locations. I decided to investigate the game's tech a bit and was surprised to learn it was developed with a custom engine by a relatively small studio. I know some companies use custom engines, but it's very difficult to find a detailed market study with that kind of information curated and kept up to date. Hence this article.
Nowadays lots of companies choose engines like [Unreal](https:
```haskell
-- | Fold over the input, folding left or right depending on the element.
origami :: (s -> l -> s) -> (r -> s -> s) -> s -> [Either l r] -> s
origami _ _ nil [] = nil
origami fl fr nil (x:xs) =
  case x of
    Left l  -> origami fl fr (fl nil l) xs
    Right r -> fr r (origami fl fr nil xs)
```
I made a way to get more free stuff and free stuff is good.
The current implementation of deriveVia is here; it works with all the examples here. It needs GHC 8.2 and th-desugar.
It is common for new Haskellers to get pampered by their compiler. For the price of a line or two, the compiler offers to do your job, writing uninteresting code for you (in the form of type class instances) such as equality, comparison, serialization, ... in the case of 3-D vectors
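The same convenience exists in Scala, where case classes play the role of a built-in `deriving` clause. A minimal sketch (the `Vec3` name is a hypothetical example, not from the article):

```scala
// A hypothetical 3-D vector: the compiler derives structural equality,
// hashing, printing, and a copy method, much like GHC's `deriving`.
case class Vec3(x: Double, y: Double, z: Double)

val a = Vec3(1, 2, 3)
val b = a.copy(z = 4) // derived copy: change one field, keep the rest
```

One line of definition buys all the "uninteresting" code; nothing here is hand-written.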
```bash
#!/boot/bzImage
# Linux kernel userspace initialization code, translated to bash
# (Minus floppy disk handling, because seriously, it's 2017.)
# Not 100% accurate, but gives you a good idea of how kernel init works
# GPLv2, Copyright 2017 Hector Martin <marcan@marcan.st>
# Based on Linux 4.10-rc2.
# Note: pretend chroot is a builtin and affects the current process
# Note: kernel actually uses major/minor device numbers instead of device names
```
At work, I just spent the last few weeks exploring and evaluating every format I could find, and my number one criterion was whether they supported sum types. I was especially interested in schema languages in which I could describe my types and then have some standard specify how to encode them using an on-the-wire format, usually JSON.
- Swagger represents sum types the way Scala does, using subtyping. So you have a parent type `EitherIntString` with two subtypes `Left` and `Right`, represented as `{"discriminator": "Left", "value": 42}` and `{"discriminator": "Right", "value": "foo"}`. Unfortunately, unlike in Scala, where the parent type is abstract and cannot be instantiated, in Swagger it looks like the parent type is concrete, so when you specify that your input is an `EitherIntString`, you might receive `{"discriminator": "EitherIntString"}` instead of one of its two subtypes.
- JSON-schema supports unions, which isn't quite the same thing as sum types because
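To make the contrast concrete, here is a minimal Scala sketch of the discriminator encoding described above (names and the hand-rolled `encode` are mine, for illustration only). Because the parent is `sealed` and abstract, only the two subtypes can ever be constructed, so the "concrete parent" problem cannot arise:

```scala
// Sealed abstract parent: only the two leaves can be instantiated.
sealed trait EitherIntString
case class L(value: Int) extends EitherIntString
case class R(value: String) extends EitherIntString

// Hand-rolled discriminator encoding (no JSON library, illustration only).
def encode(e: EitherIntString): String = e match {
  case L(n) => s"""{"discriminator": "Left", "value": $n}"""
  case R(s) => s"""{"discriminator": "Right", "value": "$s"}"""
}
```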
Every application ever written can be viewed as some sort of transformation on data. Data can come from different sources, such as a network or a file or user input or the Large Hadron Collider. It can come from many sources all at once to be merged and aggregated in interesting ways, and it can be produced into many different output sinks, such as a network or files or graphical user interfaces. You might produce your output all at once, as a big data dump at the end of the world (right before your program shuts down), or you might produce it more incrementally. Every application fits into this model.
The scalaz-stream project is an attempt to make it easy to construct, test and scale programs that fit within this model (which is to say, everything). It does this by providing an abstraction around a "stream" of data, which is really just this notion of some number of data being sequentially pulled out of some unspecified data source. On top of this abstraction, sca
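A toy sketch of that pull-based idea (this is not the actual scalaz-stream `Process` API, just the underlying notion): a stream either halts or emits one element plus a thunk producing the rest, so elements are pulled sequentially and lazily.

```scala
// A stream is either exhausted, or one element plus "the rest on demand".
sealed trait Src[+A]
case object Halt extends Src[Nothing]
case class Emit[A](head: A, tail: () => Src[A]) extends Src[A]

// A source: pull elements lazily out of a List.
def fromList[A](xs: List[A]): Src[A] = xs match {
  case Nil    => Halt
  case h :: t => Emit(h, () => fromList(t))
}

// A sink: drain the stream back into a List.
def drain[A](s: Src[A]): List[A] = s match {
  case Halt       => Nil
  case Emit(h, t) => h :: drain(t())
}
```

Real stream libraries add effects, resource safety, and combinators on top, but the shape of the abstraction is this small.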
```scala
// Define the general Arg type and companion object:
import language.higherKinds, language.implicitConversions, language.existentials
case class Arg[T, Tc[_]](value: T, typeclass: Tc[T])
object Arg { implicit def toArg[Tc[_], T: Tc](t: T): Arg[T, Tc] = Arg(t, implicitly[Tc[T]]) }

// Say, for example, we have a typeclass for getting the length of something, with a few instances
trait Lengthable[T] { def length(t: T): Int }
implicit val intLength = new Lengthable[Int] { def length(i: Int) = 1 }
implicit val stringLength = new Lengthable[String] { def length(s: String) = s.length }
```
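A usage sketch (the `len` and `total` helpers are mine, not from the snippet): the implicit `toArg` conversion packages each argument with its type class instance at the call site, so heterogeneous values can flow through a single varargs parameter.

```scala
import language.higherKinds, language.implicitConversions, language.existentials

case class Arg[T, Tc[_]](value: T, typeclass: Tc[T])
object Arg { implicit def toArg[Tc[_], T: Tc](t: T): Arg[T, Tc] = Arg(t, implicitly[Tc[T]]) }

trait Lengthable[T] { def length(t: T): Int }
implicit val intLength: Lengthable[Int] = new Lengthable[Int] { def length(i: Int) = 1 }
implicit val stringLength: Lengthable[String] = new Lengthable[String] { def length(s: String) = s.length }

// Hypothetical helpers: unpack one Arg, then sum over a mixed bag of them.
def len[T](a: Arg[T, Lengthable]): Int = a.typeclass.length(a.value)
def total(args: Arg[_, Lengthable]*): Int = args.map(a => len(a)).sum
```

`total(42, "hello")` compiles because each argument is independently converted to an `Arg` carrying its own `Lengthable` instance.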
```scala
// Define the following traits and companion object
// It's in Rapture Core (https://github.com/propensive/rapture-core) if you don't want to
trait DefaultsTo[T, S]
trait LowPriorityDefaultsTo { implicit def fallback[T, S]: DefaultsTo[T, S] = null }
object DefaultsTo extends LowPriorityDefaultsTo { implicit def defaultDefaultsTo[T]: DefaultsTo[T, T] = null }

// Then, assuming we want to specify a default for a type class like `Namer`,
case class Namer[T](name: String)
```
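A usage sketch under the assumption that we want a type parameter to default to `Int` when the call site leaves it unconstrained (the `zero` helper is hypothetical): when `T` is unbound, the high-priority `defaultDefaultsTo` unifies `T` with `Int`; when the caller pins `T` down, only the low-priority `fallback` applies, so the caller's choice wins.

```scala
trait DefaultsTo[T, S]
trait LowPriorityDefaultsTo { implicit def fallback[T, S]: DefaultsTo[T, S] = null }
object DefaultsTo extends LowPriorityDefaultsTo { implicit def defaultDefaultsTo[T]: DefaultsTo[T, T] = null }

// T defaults to Int unless the caller says otherwise.
def zero[T](implicit default: DefaultsTo[T, Int], num: Numeric[T]): T = num.zero
```

So `zero` by itself picks `Int`, while `zero[Double]` still works as expected.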
```scala
import spire.algebra._
import spire.implicits._

// A transducer transforms a reducing function over Bs into one over As.
trait Transducer[B, A] { def apply[R](rf: Transducer.RF[R, B]): Transducer.RF[R, A] }

object Transducer {
  type RF[R, A] = (R, A) => R
  // Lift a plain function into a "map" transducer.
  def apply[A, B](f: A => B): Transducer[B, A] =
    new Transducer[B, A] {
      def apply[R](rf: RF[R, B]): RF[R, A] = (r, a) => rf(r, f(a))
    }
}
```
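A usage sketch (restated self-contained, without the spire imports, which this part doesn't need): a lifted function fuses into an ordinary `foldLeft`, so the mapping happens inside the fold with no intermediate list.

```scala
trait Transducer[B, A] { def apply[R](rf: (R, B) => R): (R, A) => R }

object Transducer {
  def apply[A, B](f: A => B): Transducer[B, A] =
    new Transducer[B, A] {
      def apply[R](rf: (R, B) => R): (R, A) => R = (r, a) => rf(r, f(a))
    }
}

// "map (* 2)" as a transducer, applied to a summing reducer.
val doubled: Transducer[Int, Int] = Transducer((a: Int) => a * 2)
val sum = List(1, 2, 3).foldLeft(0)(doubled((r: Int, b: Int) => r + b))
```

`doubled` rewrites the summing reducer `(r, b) => r + b` into `(r, a) => r + a * 2`, which the fold then runs directly.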
