"It's done in hardware so it's cheap"

It isn't.

This is one of those things that look so obvious to me that it seems not worth discussing. However, I've heard the idea that "hardware magically makes things cheap" from several PhDs over the years. So apparently, if you aren't into hardware, it's not obvious at all.

So why doesn't "hardware support" automatically translate to "low cost"/"efficiency"? The short answer is, hardware is an electric circuit and you can't do magic with that, there are rules. So what are the rules? We know that hardware support does help at times. When does it, and when doesn't it?

To see the limitations of hardware support, let's first look at what hardware can do to speed things up. Roughly, you can really only do two things:

  1. Specialization - save dispatching costs in speed and energy.
  2. Parallelization - save time, but not energy, by throwing more hardware at the job.

Let's briefly look at examples of these two speed-up methods – and then some examples where hardware support does nothing for you, because neither of the two methods helps. We'll only consider run time and energy per operation, ignoring silicon area (adding a third variable just makes things too hairy).

I'll also discuss the difference between real costs of operations and the price you pay for these operations, and argue that in the long run, costs are more stable and more important than prices.

Specialization: cheaper dispatching

If you want to extract bits 3 to 7 of a 32-bit word and then multiply them by 13 – let's say an encryption algorithm requires this – you can have an instruction doing just that. That will be faster and use less energy than, say, using bitwise AND, shift & multiplication instructions.

Why – what costs were cut out? The costs of dispatching individual operations – circuitry controlling which operation is executed, where the inputs come from and where the outputs go.

Specialization can be taken to an extreme. For instance, if you want a piece of hardware doing nothing but JPEG decoding, you can bring dispatching costs close to zero by having a single "instruction" – "decode a JPEG image". Then you have no flexibility – and none of the "overhead" circuitry found in more flexible machines (memory for storing instructions, logic for decoding these instructions, multiplexers choosing the registers that inputs come from based on these instructions, etc.).

Before moving on, let's look a little closer at why we won here:

  • We got a speed-up because the operations were fast to begin with – so dispatching costs dominated. With specialization, we need 5 wires connected directly to bits 3 to 7 that have tiny physical delay – just the time it takes the signal to travel to a nearby multiplier-by-13. Without specialization, we'd use a shifter shifting by a configurable amount of bits – 3 in our case but not always – which is a bunch of gates introducing a much larger delay. On top of that, since we'd be using several such circuits communicating through registers (let's say we're on a RISC CPU), we'd have delays due to reading and writing registers, delays due to selecting registers from a large register file, etc. With all this taken out by having a specialized instruction, no wonder we're seeing a big speed-up.
  • Likewise, we'll see lower energy consumption because the operations didn't require a lot of energy to begin with. Roughly, most of the energy is consumed when a signal value changes from 1 to 0 or back. When we use general-purpose instructions, most of the gate inputs & outputs and most flip-flops changing their values are those implementing the dispatching. When we use a specialized instruction, most of the switching is gone.

This means that, unsurprisingly, there's a limit to efficiency – the fundamental cost of the operations we need to do, which can't be cut.

When the operations themselves are costly enough – for instance, memory access or floating point operations – then their cost dominates the cost of dispatching. So specialized instructions that cut dispatching costs will give us little or nothing.

Parallelization: throwing more hardware at the job

What to do when specialization doesn't help? We can simply have N processors instead of one. For the parts that can be parallelized, we'll cut the run time by N – but spend the same amount of energy. So things got faster but not necessarily cheaper. A fixed power budget limits parallelization – as does a fixed budget of, well, money (the price of a 1000-CPU rack is still not trivial today).

[Why have multicore chips if it saves no energy? Because a multicore chip is cheaper than many single core ones, and because, above a certain frequency, many low-frequency cores use less energy than few high-frequency ones.]

We can combine parallelization with specialization – in fact, it's done very frequently. The JPEG decoder mentioned above would do just that – a lot of its specialized circuits would execute in parallel.

Another example is how SIMD or SIMT processors broadcast a single instruction to multiple execution units. This way, we get only a speed-up, but no energy savings at the execution unit level: instead of one floating point ALU, we now have 4, or 32, etc. We do, however, get energy savings at the dispatching level – we save on program memory and decoding logic. As always with specialization, we pay in flexibility – we can't have our ALUs do different things at the same time, as some programs might want to.

Why do we see more single-precision floating point SIMD than double-precision SIMD? Because the higher the raw cost of operations, the less we save by specialization, and SIMD is a sort of specialization. If we have to pay for double-precision ALUs, why not put each in a full-blown CPU core? That way, at least we get the most flexibility, which means more opportunities to actually use the hardware rather than keeping it idle.

(It's really more complicated than that because SIMD can actually be a more useful programming model than multiple threads or processes in some cases, but we won't dwell on that.)

What can't be done

Now that we know what can be done – and there really isn't anything else – we basically already also know what can't be done. Let's look at some examples.

Precision costs are forever

8-bit integers are fundamentally more efficient than 32-bit floating point, and no hardware support for any sort of floating point operations can change this.

For one thing, multiplier circuit size (and energy consumption) is roughly quadratic in the size of the inputs. IEEE 32b floating point numbers have 23b mantissas – 24b counting the implicit leading 1 – so multiplying them takes a ~9x larger circuit than an 8×8-bit multiplier with the same throughput. Another cost, linear in size, is that you need more memory, flip-flops and wires to store and transfer a float than an int8.

(People are more often aware of this one because SIMD instruction sets usually have fixed-sized registers which can be used to keep, say, 4 floats or 16 uint8s. However, this makes people underestimate the overhead of floating point as 4x – when it's more like 9x if you look at multiplying mantissas, not to mention handling exponents. Even int16 is 4x more costly to multiply than int8, not 2x as the storage space difference would make one guess.)

We design our own chips, and occasionally people say that it'd be nice to have a chip with, say, 256 floating point ALUs. This sounds economically nonsensical – sure it'd be nice, and it's also quite obvious, so if nobody makes such chips at a budget similar to ours, it must be impossible – so why ask?

But actually it's a rather sensible suggestion, in that you can make a chip with 256 ALUs that is more efficient than anything on the market for what you do, but not flexible enough to be marketed as a general-purpose computer. That's precisely what specialization does.

However, specialization only helps with operations which are cheap enough to begin with compared to the cost of dispatching. So this can work with low-precision ALUs, but not with high-precision ALUs. With high-precision ALUs, the raw cost of operations would exceed our power budget, even if dispatching costs were zero.

Memory indirection costs are forever

I mentioned this in my old needlessly combative write-up about "high-level CPUs". There's this idea that we can have a machine that makes "high-level languages" run fast, and that they're really only slow because we're running on "C machines" as opposed to Lisp machines/Ruby machines/etc.

Leaving aside the question of what "high-level language" means (I really don't find it obvious at all, but never mind), object-orientation and dynamic typing frequently result in indirection: pointers instead of values and pointers to pointers instead of pointers. Sometimes it's done for no apparent reason – for instance, Erlang strings that are kept as linked lists of ints. (Why do people even like linked lists as "the" data structure and head/tail recursion as "the" control structure? But I digress.)

This kind of thing can never be sped up by specialization, because memory access fundamentally takes quite a lot of time and energy, and when you do p->a, you need one such access, while p->q->a needs two – hence you'll spend twice the time and energy. Having a single "LOAD_LOAD" instruction instead of two – a LOAD followed by another LOAD – does nothing for you.

All you can do is parallelization – throw more hardware at the problem, N processors instead of one. You can, alternatively, combine parallelization with specialization, similarly to N-way floating point SIMD being somewhat cheaper than N full-blown processors. For example, you could have several load-store units, several cache banks and a multiple-issue processor. Then, if you had to run p1->q1->a and, somewhere near that, p2->q2->b, and the pointers pointed into different banks, some of the 4 LOADs would end up running in parallel, without having several processors.

But, similarly to low-precision math being cheaper whatever the merits of floating point SIMD, one memory access is always half the cost of two, despite the merits of cache banking and multiple issue. Specifically, doubling the memory access throughput roughly doubles the energy cost. This can sometimes be better than simply using two processors, but it's a non-trivial cost and always will be.

A note about latency

We could discuss other examples but these two are among the most popular – floating point support is a favorite among math geeks, and memory indirection support is a favorite among language geeks. So we'll move on to a general conclusion – but first, we should mention the difference between latency costs and throughput costs.

In our two examples, we only discussed throughput costs. A floating point ALU with a given throughput uses more energy than an int8 ALU. Two memory banks with a given throughput use about twice the energy of a single memory bank with half the throughput. This, together with the relatively high cost of these operations compared to the cost of dispatching them, made us conclude that there's nothing we can do.

In reality, the high latency of such heavyweight operations can be the bigger problem than our inability to increase their throughput without paying a high price in energy. For example, consider the instruction sequence:

c = FIRST(a,b)
e = SECOND(c,d)

If FIRST has a low latency, then we'll quickly proceed to SECOND. If FIRST has a high latency, then SECOND will have to wait for that amount of time, even if FIRST has excellent throughput. Say, if FIRST is a LOAD, being able to issue a LOAD every cycle doesn't help if SECOND depends on the result of that LOAD and the LOAD latency is 5 cycles.

A large part of computer architecture is various methods for dealing with these latencies – VLIW, out-of-order, barrel processors/SIMT, etc. These are all forms of parallelization – finding something to do in parallel with the high-latency instruction. A barrel processor helps when you have many threads. An out-of-order processor helps when you have nearby independent instructions in the same thread. And so on.

Just like having N processors, all these parallelization methods don't lower dispatching costs - in fact, they raise them (more registers, higher issue bandwidth, tricky dispatching logic, etc.) The processor doesn't become more energy efficient - you get more done per unit of time but not per unit of energy. A simple processor would be stuck at the FIRST instruction, while a more clever one would find something to do – and spend the energy to do it.

So latency is a very important problem with fundamentally heavyweight operations, and machinery for hiding this latency is extremely consequential for execution speed. But fighting latency using any of the available methods is just a special case of parallelization, and in this sense not fundamentally different from simply having many cores in terms of energy consumed.

The upshot is that parallelization, whether it's having many cores or having single-core latency-hiding circuitry, can help you with execution speed – throughput per cycle – but not with energy efficiency – throughput per watt.

The latency of heavyweight stuff is important and not hopeless; its throughput is important and hopeless.

Cost vs price

"But on my GPU, floating point operations are actually as fast as int8 operations! How about that?"

Well, a bus ticket can be cheaper than the price of getting to the same place in a taxi. The bus ticket will be cheaper even if you're the only passenger, in which case the real cost of getting from A to B in a bus is surely higher than the cost of getting from A to B in a taxi. Moreover, a bus might take you there more quickly if there are lanes reserved for buses that taxis are not allowed to use.

It's basically a cost vs price thing – math and physics vs economics and marketing. The fundamentals only say that a hardware vendor can always make int8 cheaper than float – but they can have good reasons not to. It's not that they made floats as cheap as int8 – rather, they made int8 as expensive as floats in terms of real costs.

Just like you going alone in a bus designed to carry dozens of people is an inefficient use of a bus, using float ALUs to process what could be int8 numbers is an inefficient use of float ALUs. Similarly, just like transport regulations can make lanes available for buses but not cars, an instruction set can make fetching a float easy but make fetching a single byte hard (no load byte/load byte with sign extension instructions). But cars could use those lanes – and loading bytes could be made easy.

As a passenger, of course you will use the bus and not the taxi, because economics and/or marketing and/or regulations made it the cheaper option in terms of price. Perhaps it's so because the bus is cheaper overall, with all the passengers it carries during rush hours. Perhaps it's so because the bus is a part of the contract with your employer – it's a bus carrying employees towards a nearby something. And perhaps it's so because the bus is subsidized by the government. Whatever the reason, you go ahead and use the cheaper bus.

Likewise, as a programmer, if you're handed a platform where floating point is not more expensive or even cheaper than int8, it is perhaps wise to use floating point everywhere. The only things to note are, the vendor could have given you better int8 performance; and, at some point, a platform might emerge that you want to target and where int8 is much more efficient than float.

The upshot is that it's possible to lower the price of floating point relative to int8, but not the cost.

What's more "important" – prices or costs?

Prices have many nice properties that real costs don't have. For instance, all prices can be compared – just convert them all to your currency of choice. Real costs are hard to compare without prices: is 2x less time for 3x more energy better or worse?

In any discussion about "fundamental real costs", there tend to be hidden assumptions about prices. For example, I chose to ignore area in this discussion under the assumption that area is usually less important than power. What makes this assumption true – or false – is the prices fabs charge for silicon production, the sort of cooling solutions that are marketable today (a desktop fan could be used to cool a cell phone but you couldn't sell that phone), etc. It's really hard to separate costs from prices.

Here's a computer architect's argument to the effect of "look at prices, not costs":

While technical metrics like performance, power, and programmer effort make up for nice fuzzy debates, it is pivotal for every computer guy to understand that “Dollar” is the one metric that rules them all. The other metrics are just sub-metrics derived from the dollar: Performance matters because that’s what customers pay for; power matters because it allows OEMs to put cheaper, smaller batteries and reduce people’s electricity bills; and programmer effort matters because it reduces the cost of making software.

I have two objections: that prices are the effect, not the cause, and that prices are too volatile to commit to memory as a "fundamental".

Prices are the effect in the sense that, customers pay for performance because it matters, not "performance matters because customers pay for it". Or, more precisely – customers pay for performance because it matters to them. As a result – because customers pay for it – performance matters to vendors. Ultimately, the first cause is that performance matters, not that it sells.

The other thing about prices is that they're rather jittery. Even a price index designed for stability, such as the S&P 500, jumps up and down like crazy. In a changing world, knowledge about costs has a longer shelf life than knowledge about prices.

For instance, power is considered cheap for desktops but expensive for servers and really expensive for mobile devices. In reality, desktops likely consume more power than servers, there being more desktops than servers. So the real costs are not like the prices – and prices change; the rise of mobile computing means rising prices for power-hungry architectures.

It seems to me that, taking the long view, the following makes sense:

  • It's best to reason in costs and project them onto the relevant prices – not to forget the underlying costs and "think in prices", so as not to get into habits that will become outdated when prices change.
  • If you see a high real cost "hidden" by contemporary prices, it's a good bet to assume that at some point in the future, prices will shift so that the real cost will rear its ugly head.

For example, any RISC architecture – ARM, MIPS, PowerPC, etc. – is fundamentally cheaper than, specifically, x86, in at least two ways: hardware costs – area & power – and the costs of developing said hardware. [At least so I believe; let's say that it's not as significant in my view as my other, more basic examples, and I might be wrong – I'm only using this as an illustration.]

In the long run, this spells doom for the x86, whatever momentum it otherwise has at any point in time – software compatibility, Intel's manufacturing capabilities vs competitors' capabilities, etc. Mathematically or physically fundamental costs will, in the long run, trump everything else.

In the long run, there is no x86, no ARM, no Windows, no iPhone, etc. There are just ideas. We remember ideas originating in ancient Greece and Rome, but no products. Every product is eventually outsold by another product. Old software is forgotten and old fabs rot. But fundamentals are forever. An idea that is sufficiently more costly fundamentally than a competing idea cannot survive.

This is why I disagree with the following quote by Bob Colwell – the chief architect of the Pentium Pro (BTW, I love the interview and intend to publish a summary of the entire 160-something page document):

…you might say that CISC only stayed viable because Intel was able to throw a lot of money and people at it, and die size, bigger chips and so on.

In that sense, RISC still was better, which is what was claimed all along. And I said you know, there's point to be made there. I agree with you that Intel had more to do to stay competitive. They were starting a race from far behind the start line. But if you can throw money at a problem then, it's not really so fundamental technologically, is it? We look for more deep things than that, so if all the RISC/CISC thing amounted to was, you had a slight advantage economically, well, that's not as profound as it seemed back in the 80s was it?

Well, here's my counter-argument, and it's not technical. The technical argument would be: CISC is worse, to the point where Intel's 32nm Medfield performs only about as well as ARM-based 40nm chips in a space where power matters. Which can be countered with an economic argument – so what, Intel does have better manufacturing ability, so who cares, they still compete.

But my non-technical argument is, sure, you can be extremely savvy business-wise, and perhaps, if Intel realized early on how big mobile is going to be, they'd make a good enough x86-based offering back then and then everyone would have been locked out due to software compatibility issues and they'd reign like they reign in the desktop market.

But you can't do that forever. Every company is going to lose to some company at some point or other, because you only need one big mistake and you'll eventually make it – you'll ignore a single emerging market and that will be the end. Or someone will outperform you technically – build a better fab, etc. If an idea is only ("only"?) being dragged into the future kicking and screaming by a very business-savvy and technically excellent company, then the idea has no chance.

The idea that will win is the idea that every new product will use. New products always beat old products – always have.

And nobody, nobody at all has made a new CISC architecture in ages. Intel will lose to a company or companies making RISC CPUs because nobody makes anything else – and it has to lose to someone. Right now it seems like it's ARM but it doesn't matter how it comes out in this round. It will happen at some point or other.

And if ARM beats x86, it won't be, straightforwardly, "because RISC is better" – x86 will have lost for business reasons, and it could have gone the other way for business reasons. But the fact that it will have lost to a RISC – that will be because RISC is technically better. That's why there's no CISC competitor to lose to.

Or, if you dismiss this with the sensible "in the long run, we're all dead" – then, well, if you're alive right now and you're designing hardware, you are not making a CISC processor, are you? QED, not?

Getting back to our subject – based on the assumption that real costs matter, I believe that ugly, specialized hardware is forever. It doesn't matter how much money is poured into general-purpose computing, by whom and why. You will always have sufficiently important tasks that can be accomplished 10x or 100x more cheaply by using fundamentally cheap operations, and it will pay off for someone to make the ugly hardware and write the ugly, low-level code doing low-precision arithmetic to make it work.

And, on the other hand, the market for general-purpose hardware is always going to be huge, in particular, because there are so many things that must be done where specialization fundamentally doesn't help at all.

Conclusion

Hardware can only deliver "efficiency miracles" for operations that are fundamentally cheap to begin with. This is done by lowering dispatching costs and so increasing throughput per unit of energy. The price paid is reduced flexibility.

Some operations, such as high-precision arithmetic and memory access, are fundamentally expensive in terms of energy consumed to reach a given throughput. With these, hardware can still give you more speed through parallelization, but at an energy cost that may be prohibitive.

138 comments

#1 Tomasz Wegrzanowski on 08.04.12 at 9:00 am

You're totally confused. Memory indirection can be optimized perfectly well – every single CPU does precisely that with its virtual memory system. TLB caches are far older than regular memory caches, and if you were right then any modern operating system would be far slower than running DOS on the same hardware. TLB caches make all memory indirection related to virtual memory access essentially free. Without them computers would be how many times slower than they are now?

Lisp hardware had similar optimizations for memory indirection built into their cache systems. And modern CPUs do various indirect branch prediction etc.

#2 Yossi Kreinin on 08.04.12 at 10:33 am

I'm talking about, specifically, p->q->a vs p->a or something along those lines. What you're talking about is (1) TLB optimizations and (2) latency of dependent instructions (that's what branch prediction is about). I'm talking about the raw access to cache memory banks – not the cost of missing the cache, not the cost of address translation, not the cost of waiting for the previous instruction (though I discussed the latter). I'm talking about the raw cost in energy of local memory throughput.

#3 Rune Holm on 08.04.12 at 12:01 pm

Tomasz brings up an interesting point – although not free (address translation + memory coherence is the main reason why L1 caches in most processors are limited in size to the page size times the number of sets in the cache), virtual address translation are an example of hardware optimizing a chain of loads into a page table in a manner that is more efficient than what you could do in software.

TLBs cache the result of the full set of lookups into the page table.
However, TLBs work for the following reasons:

(1) There's a high probability that the entire set of loads will be reused in the future, (i.e. you're accessing the same page again), and the working set is tiny. In order to provide the illusion of being free, a TLB that runs in parallel with the L1 cache needs to be implemented in flip-flops rather than an SRAM, which limits the practical size to 16-64 entries.

(2) There's a very high read to write ratio, so the TLB can assume the page tables are effectively read-only, and drop the set of cached entries on a page table update

If we try to apply this to e.g. strings stored as single-linked lists, the cacheability and working set assumption (1) is easily violated. Similarly, if we apply this to a high-level language virtual machine like CPython where every value is a pointer to an object, both assumption (1) and (2) are violated.

I think the take-away from this is that while there are very specific circumstances where memory indirection can be optimized with a hardware feature, the resulting hardware feature is going to come with many caveats, rendering them hard to use for typical high-level language implementations.

#4 Jose on 08.04.12 at 1:10 pm

Hi,

I don't agree on some areas. I'm electronics engineer and had designed hardware too.

We agree that parallel means losing flexibility but we don't agree in that it does not take less power. If you use 1000 units to do the same that a CPU does at 1000x less frequency and power consumption models to the square of frequency you are wasting 1000x/((1000)*(1000))=> 1000 times less energy.

You only need 100-120Hz for video, less for audio, because the brain is a low frequency, massive parallel computer itself. If I do a CAD/CAE simulation for a design I don't care about 1/100 delay and so on. There are lots of applications for hardware acceleration.

Energy does not model exactly to the square of frequency, specially at low frec but you can also shut down hardware modules you don't use and all that combined with what you already stated.

You could design a real module that spends 300-200 times less power than what the CPU does. In fact you could see them in every phone, iPad or macbook air.

It seems to me that you don't know what you are talking about in the big picture, you see trees but can't see the forest.

#5 Yossi Kreinin on 08.04.12 at 9:52 pm

@Rune Holm: Of course managing the TLB contents is nothing like managing the actual cache contents – which is why the original comment is totally irrelevant :) There's a lot to do with loads & stores in many specific cases, but not if you have "sufficiently random addresses".

@Jose: well, I mentioned your point in the part about multicore; several slow cores being more energy efficient than one fast one is the same effect as "a real module" running at a low frequency spending less energy than a high-frequency CPU.

However, there's a certain minimal frequency where you gain nothing by lowering it still more. To me, it's interesting what happens at about that range of frequencies, because what happens above it is grossly inefficient in terms of energy consumption and people only target their designs (CPUs, mostly) at such "unhealthy" frequencies for programmer/consumer convenience. And I'm not in the consumer market. BTW, 100MHz at 40nm, for instance, is way below that minimal frequency for a reasonably pipelined design.

Also, this is a second comment pointing out a mistake I didn't even make, and not in a very polite manner. Oh, the joy of the web.

#6 Luke on 08.04.12 at 10:53 pm

Haters gonna hate…

I'm usually a lurker of your content – which I enjoy greatly.. your subject matter this time is going to draw out "the experts"

#7 Morris on 08.05.12 at 12:29 am

"I believe that ugly, specialized hardware is forever."

Is awfully close to an argument for CISC…

#8 Yossi Kreinin on 08.05.12 at 12:51 am

@Luke: glad you like my stuff.

@Morris: with CISC it would be "ugly, general-purpose hardware"; I sort of think ugliness is a price that should buy you something, and with specialisation it buys you efficiency but without, it could only buy compatibility – which is worth a fortune until people toss their old toys and buy new toys and then it's worth nothing.

#9 Wody on 08.05.12 at 6:45 am

You are wrong about Intel… They don't make CISC processors. All their designs have been RISC-model since 1995. It's just that this has been hidden under the x86 instruction set, which they emulate since the pentium pro.

#10 Alex on 08.05.12 at 7:12 am

Wody, the X86 instruction set is a CISC-style instruction set. Converting the instructions to RISC-style instruction set doesn't make the CPU a RISC CPU, it just means there's overhead that a true RISC CPU wouldn't have to deal with.

#11 tz on 08.05.12 at 7:19 am

Excellent points, though I would throw in maybe adding a FPLD next to the CPU so you can download "specialization" once. Or the CPU cores with DSP like on my Nokia n810. I guess the first question is whether the specialization is intrinsic or temporary to the whole system. There will be time/energy costs in setup, but then it runs fast.

It also goes further. QR Codes have part of the algorithm that uses mod 3 as the operator, however if you do a bit of unrolling you can remove the divide operations and I think I only have one 8×8 multiply:

https://github.com/tz1/qrduino

I'll also add an Amen to the object oriented overload. I don't have a problem with OOP per se – you can do it in plain C, if that paradigm is the right one to solve a problem. But there seems to have been a separate philosophy being taught that everything should be atomized, abstracted, obfuscated, so it becomes setvalueofA(getvalueofB()) and hope the compiler/linker is good enough to untangle it into the load-store.

This was also one reason that Windows CE was such a failure at the time (when it was competing with the Palm Pilot). You can't fool power usages or a battery. The Palm used very efficient routines – basically an original Macintosh processor and code updated. CE ported the Windows kernel with all the bloat and baggage (it had an interrupt jitter making it all but unusable for embedded). If something takes 10x the gates or cycles, or whatever resource, it will drain the battery 10x faster. Adding faster hardware is usually quadratic, i.e. to make it go 2x faster, it will eat 4x the power (and parallelization isn't free – the extra routing adds gates).

The only resource that is rarely tight today is memory, so it is often easier to create tables of results rather than efficient operations (e.g. Rainbow Tables for breaking encryption). That can be traded off for speed and power in some cases. But that doesn't apply to microcontrollers with only a few K.

#12 Jussi K on 08.05.12 at 7:45 am

Regarding CISC vs RISC, what do you think about memory efficiency of instruction encodings?

Most RISC designs I'm aware of use a fixed width (usually 32-bit) encoding, whereas I think x86 instructions average out to about 3.5 bytes per instruction, allowing more instructions to fit in a code cache line. On top of that, allowing instructions to access memory directly can eliminate entire load/store instructions compared to RISC.

Of course, optimizing encoding length is not CISC-specific as such (as evidenced by Thumb), but it would seem that CISC in general would enable shorter instructions overall, possibly leading to better memory utilization.

#13 Yossi Kreinin on 08.05.12 at 9:30 am

@tz: I actually think of an FPLD/FPGA as a specialized architecture (rather than a "clean slate you can do anything with") – I have a draft about this. Likewise, a DSP is a specialized architecture – good at a fraction of things, bad at most. Actually "specialization" is an annoyingly fuzzy term because it's not clear what "general-purpose" means; in this post I allowed myself to use the term because the context was gaining efficiency through supporting what you want in hardware, and here I can implicitly assume that you have a less efficient baseline which you consider something done "without hardware support" – and relative to that hypothetical baseline, the term makes sense. In a broader context, I think there's no such thing as "general-purpose hardware" – there are just many different architectures: CPUs are one family, FPGAs another, DSPs a third, etc. In the broad sense, though, we can call X more "general-purpose" than Y if it runs more lines of code, or sells more units, or has more programmers targeting it, or some combination; in this sense, I think CPUs are clearly the winner, and in a business sense "general purpose" is a sensible term even though it's somewhat fuzzy technically.

#14 Yossi Kreinin on 08.05.12 at 9:40 am

@Jussi K: x86 binaries, specifically, are indeed somewhat denser than RISC binaries (and ARM binaries are a bit denser than MIPS binaries, for instance, so there are differences within the RISC family, too.)

The thing is, code size used to be an issue in the old days, but it rarely is these days – most memory is used by data, most instruction accesses hit the cache, and in terms of energy, what matters is the width of your instruction cache memory bus and what it takes to decode the instructions once you fetch them; I believe RISC is cheaper to decode overall.

This shows, though, that trade-offs are nearly impossible to get right in any kind of "timeless" fashion. If you have a hairy variable-length encoding at a time when memory is really scarce, and your other option is a fixed-size encoding, presumably the reasonable thing is to look at the prices of the two – contemporary prices – and choose to conserve memory. You know that you made decoding complicated, and that's an expenditure of resources that can come back and bite you when prices shift. But it's still the right trade-off at contemporary prices.

So the only case where "looking at resources" is the sensible thing is if your choice is, waste something or not waste it, without a trade-off; this is the case with int8 vs float – but not really, not if you add, say, development effort to the mix… It's only sensible if you frame the problem as "hardware costs", which I think made sense in the context of my write-up where I try to explain what hardware can and cannot do, but not sensible in a broader case. With trade-offs, I guess all you can do is realize you made one, in terms of fundamental resources, and then watch out for price shifts…

#15 Mattias Ernelli on 08.05.12 at 11:40 am

@Yossi: FPGAs trade performance for flexibility (programmable routing networks are slower than metal conductors). I'm not sure whether that also means higher cost in terms of energy consumption – e.g., whether the same application on an FPGA consumes less energy than on a GPU or CPU.

There are very few applications where FPGA-based solutions can be compared with CPUs/GPUs; the only one I can think of is Bitcoin hash mining. According to:

https://en.bitcoin.it/wiki/Mining_hardware_comparison

FPGAs seem about 10-20x more energy efficient than GPUs, and GPUs are themselves about 10x more energy efficient than CPUs.

Anyway, FPGAs do have substantial advantages over CPUs and GPUs in terms of being able to precisely fit arithmetic precision. If you need a 12-bit adder, you use resources for a 12-bit adder, not resources for a 16-bit adder, etc.

Also, FPGAs can do bit extraction and manipulation hardwired, rather than by using generic shift/AND instructions as in your "extract bits 3:7" example.

The largest remaining drawback, I think, is that FPGAs are usually programmed by hardware designers, who are the diametric opposite of software developers in terms of development culture. Their tools are low-level timing diagrams and ancient programming languages such as VHDL. Even though C->VHDL compilers and direct synthesis of C code have been developed, FPGAs are still programmed using VHDL or Verilog.

The biggest hurdle for a software developer wanting to experiment with FPGAs is that step 1 is still to design and build hardware that uses them. Even though there are several evaluation boards available, you still need the HDL tools.

#16 Ben D on 08.05.12 at 4:54 pm

Hi Yossi,

Another good article.

It has been a long time since you published; I was wondering whether you had given up.

Deep down your main points stand:
- one still needs to perform the operations required for the task and there is a lower bound as to how much energy you will need.
- A programmable machine using a set of types and operations is inherently less efficient than a fully specialised component which can use any type and any operation as long as it can be clock/power gated when not needed
- Dollar is all that matters in the end. If someone else can make it so that your customer can make a better buck out of the product, you will lose.

I'll be careful from here on; I am a bit worried about divulging trade-sensitive information from my current or previous employer.

I'd just want to point out that:
1) SMT is not as bad as you make it out to be – at least once you've gone through the hassle of building a full OOO, multi-issue pipeline. I would not be surprised for designs based on the technology to become fashionable again; barrel processors, on the other hand, probably not. I would be even less surprised to see x86 with wider execution pipelines and more than 2-way SMT. But then, I don't work for Intel, so it's just one of my guesses on how they will fight the lighter many-cores which seem to be entering the server market. Essentially, unused powered silicon is a pure energy loss. Low-level clock gating is hard (as in small clock domains), and low-level power gating is even harder. Also, it can be good for 3). Wide pipelines and a good I-cache mean that you can boost instruction throughput for specific types of parallel-friendly streams, yet having several instruction streams means that you can perform OK on high-latency workloads by running several in parallel. Scheduling threads in such a setting is an interesting issue.

2) Dynamic consumption is a lot less of a problem than one would think in modern devices: leakage is huge on the latest processes, and clock distribution is a massive energy hog on high-speed designs. At less cutting-edge nodes and at lower frequencies, it is still very relevant. But we are talking CPUs and GPUs here; comparing them with special-purpose HW is a bit of a difficult comparison. On one side you have something running at 1-2 GHz on cutting-edge processes, on the other something running at 100-400 MHz a couple of nodes behind, most of the time.

3) One real difference between accelerators and general processors is memory coherency. Accelerators will typically be given private memory and a minimalistic MMU, just so that the device memory space can be abstracted. On the other hand, memory coherency in coherent multi-core systems is an arduous issue (and an expensive one in terms of power).

#17 Stanislav Datskovskiy on 08.06.12 at 2:10 pm

Time and power consumption are not the only costs imposed by the use of idiot hardware.

What is the total economic cost of the existence of buffer overflows of every kind? They simply don't need to exist. Any of them. Implement hardware bounds checking for *all* array accesses, and hardware type-checking for all arithmetic ops (1970s state of the art!) and buffer overflows vanish, taking with them the lion's share of known security exploits, crashes, and other digital miseries.

#18 Rune Holm on 08.07.12 at 12:29 am

x86 has had the BOUND instruction for hardware-accelerated bounds checking since the 80186 in 1982. It has also had the INTO instruction for hardware-accelerated integer arithmetic overflow checks since the 8086 in 1978. Not that it seemed to help.

And even for typical web languages that do provide array bounds checking and widening of integer arithmetic into arbitrary-precision integers, the security problems did not vanish; they just shifted to e.g. SQL injection and cross-site scripting.

I believe secure programming is a programmer and programming language problem, not a hardware problem.

#19 Yossi Kreinin on 08.07.12 at 9:01 am

@Mattias Ernelli: there are many differences in the table even within, say, the ARM family that are hard to understand (not directly related to, say, frequency); I guess knowing more about the systems and the benchmark could help.

@Ben D (glad to hear from you!): I don't think I said much about OOO or SMT – nor about how "bad" they were… As to dynamic vs static consumption – it depends on your process. At 40nm LP, static consumption is almost zero at room temperature and still very small around 100C. I don't compare CPUs designed for high frequencies with accelerators designed for low power – there are CPUs designed, or at least synthesised, for low power. Regarding memory coherence in accelerators – sure, it's one of the whole slew of things accelerators don't have to worry about, and which makes CPUs really, really inefficient…

@Stanislav Datskovskiy – I agree with Rune Holm I guess… With the twist that it's really a hardware/software co-design problem – hardware would have the features if much of the software used them, much of the software would use the features if almost all of the hardware had them. It's a chicken and egg problem in a situation where hardware and software evolution is only very loosely coordinated.

#20 Dmitry on 08.08.12 at 7:32 am

You can find mentions that astronomers at JPL integrated the motion of solar system bodies on a DEC Alpha with 128-bit precision (H-floating in DEC terminology). And then nothing. The marketing sheets of some NVidia cards say that some graphics computations are supposedly done with 128-bit precision, but you still can't use it through CUDA – there are no such data types. Intel processors don't have them either (though compilers do, working through emulation, very slowly).

I'm curious about your opinion: why doesn't anyone ship processors with native quadruple-precision support? From the standpoint of precision costs, does it even make sense to build them? For now, those who need 128 bits use double-double arithmetic.

#21 Yossi Kreinin on 08.08.12 at 8:10 am

I guess your question is: if you need 128 bits of precision, is hardware support better than emulation, right? I don't know, really – huge multipliers in particular make no sense starting at some size, I think; better to implement them with 4 multiplies using 2x-smaller hardware multipliers, or some such. But I never cared about this sort of thing, so I wouldn't really know where to draw the line.

My guess is that a nice system could use software emulation sped up by fairly trivial hardware support for shuffling bits – extracting/packing mantissas, exponents and signs, dealing with parts of large mantissas, etc. – and that such emulation with rudimentary hardware support would have about the same energy efficiency as full-blown hardware support for 128b floating point; but it's really just a guess.

#22 Yossi Kreinin on 08.08.12 at 8:12 am

Oh, as to why there are no processors with native/nice support for quadruple precision: I think it's just because it's a tiny market – almost no software would use the feature, so why bother. As to processors targeted at supercomputers – perhaps even scientists don't care about quad precision and perhaps they do have some sort of rudimentary support that helps bring emulation to about the same energy efficiency as full-blown hardware support would have; I dunno.

#23 Ken on 08.20.12 at 2:08 pm

Over the years I noticed that hardware beyond the level of the simplest RISC architecture tended to resemble fossilized software intended to do something faster, or cheaper with regard to some resource, or better in some way. Faster and cheaper were usually measurable. Better was much fuzzier.

A real question with floating point arithmetic: "Does it matter that you almost always get a wrong (imprecise) answer?"

#24 Yossi Kreinin on 08.20.12 at 10:02 pm

Well, there's also the question of what's the alternative…

#25 Aristotle Pagaltzis on 11.03.12 at 3:21 am

"In the long run, this spells doom for the x86"

Seems like the long run will turn out to have been surprisingly short.

#26 Yossi Kreinin on 11.03.12 at 6:43 am

@Aristotle: I think it's still in the "it's not over till it's over" stage, but yeah.

#27 Aristotle Pagaltzis on 12.25.12 at 8:49 am

Looks like it’s in free fall: “Windows PCs sales in U.S. retail stores fell a staggering 21% in the four-week period from October 21 to November 17, compared to the same period the previous year. In short, there is now falling demand for x86 processors.” The rest of that article looks at plenty more signs, but none so visceral as that one.

And here someone is making sense about relegating the x86 to the role of coprocessor to an ARM CPU while PCs still need it.

It’s not over but it sure looks like a foregone conclusion at this point.

#28 Aristotle Pagaltzis on 12.25.12 at 8:58 am

(Meanwhile, tangentially, you can run now bonafide RISC OS on an honest to goodness ARM-powered PC – no emulation, the real deal. The Archimedes 3000 I always dreamed of when I was a boy but never had the pocket money to buy before they died – it’s now but a few bucks away. Forgotten youth that never was, here I come…)

#29 Yossi Kreinin on 12.25.12 at 11:02 am

"Free fall" assuming what acceleration constant? :)

I'm with you of course, although the demise of the PC as a platform isn't necessarily that great (it's a question what the desktop computer is going to be like).

#30 Aristotle Pagaltzis on 12.25.12 at 6:49 pm

I do not expect the desktop computer to change all that much – unless you mean “PC as a platform” in the narrow Wintel sense, and your question is about what OS is going to run on ARM desktops. Microsoft is not likely to deliver one, Apple is highly likely, and I really hope someone else steps into that void. (ChromeOS…? Eh.) But an Apple-only future would be a far bleaker outlook than a Microsoft-only one ever was, in spite of their far better products. (Partly, actually, because of that. People are delighted to have no choice other than Apple, in droves.)

#31 Yossi Kreinin on 12.25.12 at 11:23 pm

I meant "PC as a platform" in the sense of having standard protocols and building your system using devices from disparate vendors that use these protocols; and in the sense of being able to run an OS that the machine didn't come preinstalled with. And even in the narrower sense – Windows is wonderful in ways that nothing else is. It's not as bad as an Apple product in terms of the choices it dares to take away from you and it offers compatibility that a Linux distribution is yet to achieve (including Android – I can't run simple games on my 2-year-old phone without upgrading the OS).

As to Apple: I never found their products to be substantially different from the competition, and I think their design is rather ugly, especially rounded rectangles everywhere. My explanation of their recent runaway success is something that you could call hypnosis, and I give them maybe 5 years, 8 years, tops to fall back into relative obscurity now that the hypnotist has passed away.

#32 Bruce Dawson on 03.18.13 at 4:47 pm

Having programmed for both x86 and PowerPC I found, to my surprise, that I prefer x86. Well, x64 anyway. Yes, it's messy, but some of that mess adds value. For instance, loading constants wider than 16 bits is trivial on x86, but on PowerPC requires either multiple instructions or reading the constant from a 64-KB section pointed to by a reserved register. This is one instance of a general problem caused by fixed-size instructions – sometimes you don't have enough bits to express the instruction you want. In x86 land you can *always* extend the instruction set.

x86 instructions can also be far more powerful with addressing modes like eax += [const+reg1+reg2*8].

So those are the benefits of x86. The costs, on the other hand, only go down, as the portion of the die devoted to decoding becomes a smaller fraction. Looked at that way, it seems inevitable that RISC chips in a particular market will have a temporary advantage (when decode size/power matters), but if they fail to win quickly then their advantage will disappear.

Or maybe you're right and Intel will eventually lose. Certainly winning streaks are hard to maintain.

#33 Yossi Kreinin on 03.19.13 at 3:15 am

You mean you programmed in assembly and liked x86 better? That would make sense; RISC is nicer to target a compiler at, not to hand-code for. As to the portion of the die – look at ARM-based designs with 4 high-end cores and 4 low-end cores. How big would the low-end cores be if they were x86? It's not true that larger chips always use larger cores – they could use more cores instead; and chips increasingly shrink over time as power and yield constraints prevent us from fully reaping Moore's law benefits.

Winning streaks are impossible to maintain forever. Now if you had a CISC contender, then CISC would have a chance in the long run, but there isn't any.

#34 Paul B on 10.14.13 at 5:56 am

@Aristotle: My opinion is that the idea of combining an A5 with an x64 is mad. We're not yet at the point in heterogeneous computing where that makes sense. Wait a few more years, until the GPU is a first-class citizen of the processor interconnect fabric. Right now it's on the wrong end of a PCIe link, and all access would need an arbitration chip or some equally ugly hack (not that there aren't recent examples of those, like Optimus v1).

@Yossi, et-al (mostly et-al)
Like you I expect RISC based designs to dominate long-term but, while nothing lasts forever, forever is a long time. The other comment on this, mostly applicable to ARM and POWER is that RISC is generally taken as a base to build on, complexity is added back in every time something turns out to work better with a specialised instruction e.g. SIMD.

#35 Yushill L on 09.10.15 at 12:44 pm

Excellent reading, thanks for that.

Though almost two years after the last post, I still feel the need to react to that misleading RISC vs CISC debate.

The terms RISC and CISC have been extrapolated to qualify very different notions: variable-length instruction encoding, symmetry of the encoding, decoupling memory accesses from computation, general-purpose registers vs specialized ones. However, initially the only difference between the terms was Complex vs Simple (Reduced). Of course, defining the exact boundary between Complex and Simple is a hard task.

In modern out-of-order processors, the complexity of decoding is not that important (though I must admit that x86[_64] brought instruction decoding to an unsuspected level of brainf***king). Maybe one of the major instruction features that needs to be decoded as soon as possible is whether or not the instruction is a branch, so as to do everything possible to avoid breaking instruction fetch. As a second step, easy identification of register dependencies is really nice to have. So-called RISC architectures such as Power, ARMv8 and Alpha (and others) do provide simple decoding. But ARMv7 (RISC?) makes branch identification almost harder than x86 (CISC) does. In fact, the first pages of the ARMv8 specification acknowledge, in a very interesting discussion, the weaknesses of ARMv7.

In the end, I think Wody and Alex are both right: Intel architectures are internally obviously RISC-like, and yes, the instruction set is complex. But apart from designing extremely-ultra-super-low-end cores, I don't think the ISA overhead matters anyway/anymore. If you were to design an instruction set from scratch you would obviously not do anything close to the x86 ISA; however, Intel is not starting from scratch…

#36 zong on 08.29.16 at 2:43 am

I am very late to this discussion.
But I feel you have A FUNDAMENTAL PROBLEM in your logic.

The COST of any given operation you discuss (8/32/load/multiply/etc.) is not fixed. In fact, it has an exponential range of values.

The LOAD instruction takes a picojoule if it is an L1 hit. It can take 10^6 times that if it triggers some sort of NUMA page access (or whatever).

Generations of architects have brought us a finely tuned ecosystem where imperative code is compiled for a certain instruction set, running on a system with very specific trade-offs.

The amortized cost of an average operation is then somewhat predictable.
As soon as you try a "new" component, e.g. excess indirection or fuzzy arithmetic, of course the system is thrown out of whack.

But give it a few decades and there is no reason why a cache system can make a->p1->p2 faster than accessing a[1],a[2]

Similar arguments can be made about many core parallelism taking LESS ABSOLUTE POWER e.g. by always selecting a core closest to the memory or what ever (for by lower frequency as you mentioned).

So your entire argument is limited to the CURRENT status quo rather than an absolute limit.

#37 Yossi Kreinin on 08.29.16 at 8:13 am

"there is no reason why a cache system can make a->p1->p2 faster than accessing a[1],a[2]"

This is correct (though I assume you meant "can't", and that version is incorrect). Try it.

"Similar arguments can be made about many core parallelism taking LESS ABSOLUTE POWER e.g. by always selecting a core closest to the memory or what ever"

They can be made, and they're wrong. Try it.


#93 Jerome on 07.09.19 at 12:12 am

I am actually thankful to the holder of this web page who has shared this fantastic paragraph at here.

#94 herb shop Houston on 07.09.19 at 12:22 pm

I must thank you for the efforts you've put in writing this blog.

I am hoping to see the same high-grade content by you later
on as well. In fact, your creative writing abilities has encouraged me to get my own, personal site now
;)

#95 fps unlocker download on 07.09.19 at 5:14 pm

Thank You for this.

#96 coco melody on 07.10.19 at 10:48 am

Your mode of telling all in this post is actually
good, all be capable of easily know it, Thanks a lot.

#97 bed and breakfast Scotland on 07.10.19 at 2:30 pm

whoah this weblog is excellent i love reading your articles.
Stay up the good work! You recognize, lots of individuals are hunting around
for this info, you can aid them greatly.

#98 consultorseo71481.timeblog.net on 07.12.19 at 1:46 pm

I just like the valuable information you supply in your articles.
I will bookmark your weblog and test once more here regularly.
I am somewhat sure I will be informed lots of new stuff proper here!
Good luck for the following!

#99 http://consultor-de-seo26936.aioblogs.com/15298713/why-a-faculty-management-process-retains-much-more-relevance-than-we-expect-it-does on 07.12.19 at 11:27 pm

Hey! This is kind of off topic but I need some advice from an established blog.
Is it very difficult to set up your own blog?
I'm not very techincal but I can figure things out pretty fast.
I'm thinking about creating my own but I'm not sure where to begin. Do you
have any ideas or suggestions? Thank you

#100 먹튀검증 on 07.13.19 at 2:24 am

Having read this I thought it was extremely enlightening.
I appreciate you spending some time and effort to put this information together.
I once again find myself personally spending a
lot of time both reading and leaving comments. But so what, it was still worthwhile!

#101 eva on 07.14.19 at 2:08 am

Thank you for the great read!

#102 Activated Charcoal Teeth Whitening Strips on 07.14.19 at 11:38 am

It's perfect time to make some plans for the future and it's time to be happy.

I have read this post and if I could I want to suggest you few interesting things or tips.
Maybe you can write next articles referring to this article.

I desire to read more things about it!

#103 fucking live show on 07.15.19 at 2:23 am

some great ideas this gave me!

#104 4A hair on 07.15.19 at 11:53 pm

Appreciate this post. Will try it out.

#105 legalporno free on 07.16.19 at 12:35 am

great advice you give

#106 scrap catalytic converter price guide on 07.16.19 at 3:00 am

Admiring the time and effort you put into your blog and detailed information you offer.
It's nice to come across a blog every once in a while that isn't the
same unwanted rehashed material. Wonderful read! I've bookmarked your site and I'm adding your RSS feeds to my Google account.

#107 Georgia on 07.16.19 at 11:25 pm

Fastidious respond in return of this issue with firm arguments
and explaining the whole thing about that.

#108 Ferdinand Inaba on 07.18.19 at 1:31 am

Alex9, this message is your next piece of information. Please contact the agency at your convenience. No further information until next transmission. This is broadcast #4649. Do not delete.

#109 jemma_valentine on 07.19.19 at 2:13 am

thx you so much for posting this!

#110 diana on 07.19.19 at 2:15 am

just what I needed to read

#111 buy drugs online on 07.19.19 at 3:06 am

This blog is amazing! Thank you.

#112 buydrugonline on 07.19.19 at 3:07 am

This blog is amazing! Thank you.

#113 Melvin on 07.20.19 at 4:02 pm

I have learn several just right stuff here. Definitely price bookmarking for revisiting.
I surprise how a lot effort you set to make this kind of fantastic informative website.

#114 prodigy hacked on 07.21.19 at 8:04 pm

I’m impressed, I have to admit. Genuinely rarely should i encounter a weblog that’s both educative and entertaining, and let me tell you, you may have hit the nail about the head. Your idea is outstanding; the problem is an element that insufficient persons are speaking intelligently about. I am delighted we came across this during my look for something with this.

#115 Doretha on 07.22.19 at 3:01 pm

Outstanding story there. What occurred after?
Good luck!

#116 20 yard dumpster on 07.23.19 at 1:25 am

Hi there to every body, it's my first pay a quick visit of
this web site; this website carries remarkable and genuinely fine stuff in support of visitors.

#117 acid swapper on 07.23.19 at 7:34 pm

Good, this is what I was browsing for in google

#118 daxte cougar on 07.23.19 at 10:58 pm

I am 43 years old and a mother this helped me!

#119 date cougag on 07.23.19 at 10:59 pm

I am 43 years old and a mother this helped me!

#120 descargar peliculas on 07.23.19 at 11:22 pm

Keep on working, great job!

#121 dats cougar on 07.23.19 at 11:36 pm

I am 43 years old and a mother this helped me!

#122 dafte cougar on 07.23.19 at 11:55 pm

I am 43 years old and a mother this helped me!

#123 date coutgar on 07.24.19 at 12:03 am

I am 43 years old and a mother this helped me!

#124 adb.com file scavenger 5.3 crack on 07.24.19 at 7:45 pm

Just wanna input on few general things, The website layout is perfect, the articles is very superb : D.

#125 guest house in Windhoek on 07.25.19 at 2:38 am

Hi there! I realize this is sort of off-topic but I needed to ask.
Does operating a well-established website like yours require
a massive amount work? I am completely new to running a blog however I do write in my diary everyday.
I'd like to start a blog so I can easily share my personal experience
and feelings online. Please let me know if you have any recommendations
or tips for new aspiring blog owners. Thankyou!

#126 self catering accommodation in Windhoek on 07.25.19 at 4:49 am

I would like to thank you for the efforts you've put in penning this site.
I'm hoping to view the same high-grade blog posts from
you later on as well. In truth, your creative writing
abilities has encouraged me to get my very own blog
now ;)

#127 skisploit on 07.25.19 at 10:46 pm

Intresting, will come back here more often.

#128 Rosemary on 07.26.19 at 11:47 am

Very nice post. I simply stumbled upon your weblog and wanted to say that I have truly loved surfing
around your blog posts. After all I'll be subscribing in your feed and I hope you write once
more very soon!

#129 토토사이트 on 07.26.19 at 3:11 pm

I simply couldn't depart your site prior to suggesting that I
really loved the usual info an individual provide to
your guests? Is gonna be back incessantly in order
to check up on new posts

#130 skisploit on 07.27.19 at 12:03 am

Enjoyed reading through this, very good stuff, thankyou .

#131 비아그라 on 07.27.19 at 4:16 pm

I'm truly enjoying the design and layout of your site. It's a very easy on the eyes which makes it much more pleasant for
me to come here and visit more often. Did you hire out a designer
to create your theme? Great work!

#132 먹튀검증 on 07.28.19 at 12:16 am

It's the best time to make some plans for the future
and it's time to be happy. I've read this post and if I could I desire to
suggest you few interesting things or suggestions.

Perhaps you could write next articles referring to this article.
I wish to read even more things about it!

#133 Criminal Law Dewsbury on 07.28.19 at 1:34 am

It's amazing in favor of me to have a site, which is good in favor of
my experience. thanks admin

#134 Josh on 07.28.19 at 3:14 pm

Its such as you learn my mind! You seem to grasp a lot about
this, like you wrote the e-book in it or something.
I believe that you simply can do with some % to drive the message
home a little bit, however other than that, that is
magnificent blog. A great read. I will certainly be back.

#135 Tahiti on 07.29.19 at 2:20 am

Hi! I understand this is somewhat off-topic however I had to
ask. Does building a well-established blog like yours require a massive
amount work? I'm completely new to blogging however I do
write in my diary every day. I'd like to start a blog so I will be able to share my personal experience and views online.
Please let me know if you have any ideas or tips for brand new aspiring blog owners.
Thankyou!

#136 binance on 07.29.19 at 3:30 pm

Hi, always i used to check webpage posts here early in the break of day, because i enjoy
to gain knowledge of more and more.

#137 booking on 07.29.19 at 11:47 pm

I was suggested this website by my cousin. I am not sure
whether this post is written by him as no one else know such detailed about
my difficulty. You are wonderful! Thanks!

#138 booking on 07.30.19 at 12:31 am

Good day! I know this is kinda off topic but I was wondering which blog platform are you using for this website?
I'm getting tired of WordPress because I've had issues with hackers and I'm looking at alternatives for another
platform. I would be fantastic if you could point me in the direction of a good platform.