2010-04-02 09:54:09 8 Comments

I have a 128-bit unsigned integer A and a 64-bit unsigned integer B. What's the fastest way to calculate `A % B` - that is, the (64-bit) remainder from dividing A by B?

I'm looking to do this in either C or assembly language, but I need to target the 32-bit x86 platform. This unfortunately means that I cannot take advantage of compiler support for 128-bit integers, nor of the x64 architecture's ability to perform the required operation in a single instruction.

**Edit:**

Thank you for the answers so far. However, it appears to me that the suggested algorithms would be quite slow - wouldn't the fastest way to perform a 128-bit by 64-bit division be to leverage the processor's native support for 64-bit by 32-bit division? Does anyone know if there is a way to perform the larger division in terms of a few smaller divisions?

**Re: How often does B change?**

Primarily I'm interested in a general solution - what calculation would you perform if A and B are likely to be different every time?

However, a second possible situation is that B does not vary as often as A - there may be as many as 200 As to divide by each B. How would your answer differ in this case?


## 13 comments

## @Peter Cordes 2016-09-01 08:22:58

I know the question specified 32-bit code, but the answer for 64-bit may be useful or interesting to others.

And yes, 64b/32b => 32b division does make a useful building-block for 128b % 64b => 64b. libgcc's `__umoddi3` (source linked below) gives an idea of how to do that sort of thing, but it only implements 2N % 2N => 2N on top of a 2N / N => N division, not 4N % 2N => 2N. Wider multi-precision libraries are available, e.g. https://gmplib.org/manual/Integer-Division.html#Integer-Division.

GNU C on 64-bit machines does provide an `__int128` type, and libgcc functions to multiply and divide as efficiently as possible on the target architecture.

x86-64's `div r/m64` instruction does 128b/64b => 64b division (also producing the remainder as a second output), but it faults if the quotient overflows. So you can't directly use it if `A/B > 2^64-1`, but you can get gcc to use it for you (or even inline the same code that libgcc uses).

This compiles (Godbolt compiler explorer) to one or two `div` instructions (which happen inside a libgcc function call). If there were a faster way, libgcc would probably use it instead.
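The snippet that "This compiles" refers to isn't preserved in this copy; a minimal sketch of the idea (assuming GNU C's `unsigned __int128` on x86-64, not necessarily the answer's exact code) would be:

```c
#include <stdint.h>

// Let gcc pick the best 128 % 64 strategy: it emits a call to libgcc's
// __umodti3, whose fast path is a single hardware div instruction.
uint64_t mod128by64(unsigned __int128 a, uint64_t b)
{
    return (uint64_t)(a % b);
}
```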

`__umodti3`, the function it calls, calculates a full 128b/128b modulo, but the implementation of that function does check for the special case where the divisor's high half is 0, as you can see in the libgcc source. (libgcc builds the si/di/ti versions of the function from that code, as appropriate for the target architecture. `udiv_qrnnd` is an inline asm macro that does unsigned 2N/N => N division for the target architecture.)

For x86-64 (and other architectures with a hardware divide instruction), the fast path (when `high_half(A) < B`, guaranteeing `div` won't fault) is just two not-taken branches, some fluff for out-of-order CPUs to chew through, and a single `div r64` instruction, which takes about 50-100 cycles^1 on modern x86 CPUs, according to Agner Fog's insn tables. Some other work can be happening in parallel with `div`, but the integer divide unit is not very pipelined and `div` decodes to a lot of uops (unlike FP division).

The fallback path still only uses two 64-bit `div` instructions for the case where `B` is only 64-bit but `A/B` doesn't fit in 64 bits, so `A/B` directly would fault.

Note that libgcc's `__umodti3` just inlines `__udivmoddi4` into a wrapper that only returns the remainder.

Footnote 1: 32-bit `div` is over 2x faster on Intel CPUs. On AMD CPUs, performance only depends on the size of the actual input values, even if they're small values in a 64-bit register. If small values are common, it might be worth benchmarking a branch to a simple 32-bit division version before doing 64-bit or 128-bit division.

## For repeated modulo by the same `B`

It might be worth considering calculating a fixed-point multiplicative inverse for `B`, if one exists. For example, with compile-time constants, gcc does the optimization for types narrower than 128b.

x86's `mul r64` instruction does 64b*64b => 128b (rdx:rax) multiplication, and can be used as a building block to construct a 128b * 128b => 256b multiply to implement the same algorithm. Since we only need the high half of the full 256b result, that saves a few multiplies.

Modern Intel CPUs have very high-performance `mul`: 3c latency, one per clock throughput. However, the exact combination of shifts and adds required varies with the constant, so the general case of calculating a multiplicative inverse at run-time isn't quite as efficient each time it's used as a JIT-compiled or statically-compiled version (even on top of the pre-computation overhead).

IDK where the break-even point would be. For JIT-compiling, it will be higher than ~200 reuses, unless you cache generated code for commonly-used `B` values. For the "normal" way, it might possibly be in the range of 200 reuses, but IDK how expensive it would be to find a modular multiplicative inverse for 128-bit / 64-bit division.

libdivide can do this for you, but only for 32 and 64-bit types. Still, it's probably a good starting point.

## @caf 2010-04-02 12:15:19

You can use the division version of Russian Peasant Multiplication.

To find the remainder, execute (in pseudo-code):
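The pseudo-code block itself isn't preserved in this copy. Reconstructed from chux's transcription in the comments below, with the later `<` to `<=` fix applied, it looks like this (shown on a plain `unsigned`; for the question you would run the same steps on a 128-bit A held in two 64-bit halves):

```c
#include <assert.h>

// Shift-and-subtract ("Russian peasant" division) remainder.
// Includes caf's later fix: the first loop uses <=, not <.
unsigned caf_mod(unsigned A, unsigned B)
{
    assert(B != 0);           // pre-check for a zero divisor
    unsigned X = B;
    while (X <= A / 2)        // scale X up without overflowing past A
        X <<= 1;
    while (A >= B) {          // subtract successively smaller multiples of B
        if (A >= X)
            A -= X;
        X >>= 1;
    }
    return A;                 // the remainder
}
```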

The modulus is left in A.

You'll need to implement the shifts, comparisons and subtractions to operate on values made up of a pair of 64 bit numbers, but that's fairly trivial (likely you should implement the left-shift-by-1 as

`X + X`

).This will loop at most 255 times (with a 128 bit A). Of course you need to do a pre-check for a zero divisor.

## @chux - Reinstate Monica 2016-08-31 18:49:10

Code has a bug. Interesting that it was not reported in 6 years. Try `A=2, B=1`: it goes into an infinite loop. `0x8711dd11 mod 0x4388ee88` fails (result s/b 1, not 0x21c47745), as do others. Suggest `while (X < A/2)` --> `while (X <= A/2)` to repair. Your pseudo-code as tested: `unsigned cafMod(unsigned A, unsigned B) { assert(B); unsigned X = B; while (X < A / 2) { X <<= 1; } while (A >= B) { if (A >= X) A -= X; X >>= 1; } return A; }`

## @caf 2016-09-01 01:13:31

@chux: You're absolutely right, fixed. It probably wasn't reported earlier because it only happens when A = 2ⁿ B or A = 2ⁿ B + 1. Thanks!

## @Peter Cordes 2019-07-17 14:52:50

Yup, in x86 asm implementing `x<<=1` as `add lo,lo` / `adc mid,mid` / ... is more efficient than `shl lo` / `rcl mid,1` / ... But in C the compiler should do that for you. Of course in x86 asm, you should actually use `bsr` (bit-scan) or `lzcnt` (leading-zero count) to find the position of the highest set bit, then use `shld hi, mid2, cl` / ... / `shl low, cl` to do all the shifting in one step instead of looping for that first `while (x <= A/2)` loop. In 32-bit mode, using SSE2 for XMM SIMD shifts with 64-bit elements is tempting, especially to reduce branching for leading-zero counts >= 32.

## @CookieNinja 2016-11-03 10:49:50

If 128-bit unsigned by 63-bit unsigned is good enough, then it can be done in a loop doing at most 63 iterations.

Consider this a proposed solution to MSN's overflow problem, limiting the overflow to 1 bit. We do so by splitting the problem in two: modular multiplication, and adding the results at the end.

In the following example, upper corresponds to the most significant 64 bits, lower to the least significant 64 bits, and div is the divisor.

The only problem is that, if the divisor is 64 bits, then we get an overflow of 1 bit (loss of information), giving a faulty result.

It bugs me that I haven't figured out a neat way to handle the overflows.
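The example referred to above isn't preserved in this copy. A sketch in the same spirit (names `upper`/`lower`/`div` taken from the text; this straightforward version folds the low half in bit by bit, and it is correct exactly when `div < 2^63`, which is the 1-bit headroom the answer describes):

```c
#include <stdint.h>

// Compute (upper * 2^64 + lower) mod div, for div < 2^63.
// Doubling the running remainder cannot overflow 64 bits because
// rem < div < 2^63; a 64-bit divisor would overflow by exactly 1 bit.
uint64_t mod_128_by_63(uint64_t upper, uint64_t lower, uint64_t div)
{
    uint64_t rem = upper % div;
    for (int i = 0; i < 64; ++i) {   // fold in the 64 low bits, MSB first
        rem <<= 1;                   // rem*2 < 2^64 since div < 2^63
        rem |= (lower >> 63);        // bring in the next bit of lower
        lower <<= 1;
        if (rem >= div)              // 2*rem + bit < 2*div, so one
            rem -= div;              // subtraction is enough
    }
    return rem;
}
```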

## @chux - Reinstate Monica 2016-08-31 19:00:43

The accepted answer by @caf was really nice and highly rated, yet it contained a bug not seen for years.

To help test that and other solutions, I am posting a test harness and making this community wiki.
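The posted harness itself isn't preserved in this copy; a minimal stand-in with the same purpose (names and constants are illustrative, not chux's actual code) could look like:

```c
#include <assert.h>
#include <stdint.h>

// Compare a candidate mod routine against the native % operator
// over a spread of pseudo-random operands (simple LCG, fixed seed).
typedef uint64_t (*mod_fn)(uint64_t a, uint64_t b);

static uint64_t native_mod(uint64_t a, uint64_t b) { return a % b; }

static void check_mod(mod_fn f, unsigned trials)
{
    uint64_t seed = 12345;
    for (unsigned i = 0; i < trials; ++i) {
        seed = seed * 6364136223846793005ULL + 1442695040888963407ULL;
        uint64_t a = seed;
        seed = seed * 6364136223846793005ULL + 1442695040888963407ULL;
        uint64_t b = (seed % 1000) + 1;   // avoid a zero divisor
        assert(f(a, b) == a % b);         // must match the reference
    }
}
```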

## @Accipitridae 2010-04-06 21:40:36

The solution depends on what exactly you are trying to solve.

E.g. if you are doing arithmetic in a ring modulo a 64-bit integer, then using Montgomery reduction is very efficient. Of course this assumes that you use the same modulus many times and that it pays off to convert the elements of the ring into a special representation.

To give just a very rough estimate of the speed of this Montgomery reduction: I have an old benchmark that performs a modular exponentiation with 64-bit modulus and exponent in 1600 ns on a 2.4 GHz Core 2. This exponentiation does about 96 modular multiplications (and modular reductions) and hence needs about 40 cycles per modular multiplication.
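For reference, a hedged sketch of the core reduction step (REDC) for an odd 64-bit modulus, written with GNU C's `unsigned __int128` rather than the 32-bit-target code the question asks for; the restriction `n < 2^63` keeps the 128-bit sum from overflowing:

```c
#include <stdint.h>

// Montgomery reduction: returns t * R^-1 mod n, for R = 2^64,
// odd n < 2^63, t < n*R. n_inv_neg is -n^-1 mod 2^64, precomputed
// once per modulus.
static uint64_t redc(unsigned __int128 t, uint64_t n, uint64_t n_inv_neg)
{
    uint64_t m = (uint64_t)t * n_inv_neg;                // (t mod R)*n' mod R
    unsigned __int128 s = t + (unsigned __int128)m * n;  // divisible by R
    uint64_t r = (uint64_t)(s >> 64);                    // s / R, < 2n
    return (r >= n) ? r - n : r;                         // one conditional sub
}

// Newton's iteration for n^-1 mod 2^64 (n odd), then negate.
// x = n is already correct mod 8; each step doubles the correct bits.
static uint64_t neg_inverse(uint64_t n)
{
    uint64_t x = n;
    for (int i = 0; i < 5; ++i)   // 3 -> 6 -> 12 -> 24 -> 48 -> 96 bits
        x *= 2 - n * x;
    return (uint64_t)(0 - x);     // -n^-1 mod 2^64
}
```

Multiplication in the ring is then `redc((unsigned __int128)aR * bR, n, n_inv_neg)` on Montgomery-form values, and addition works directly on them with a conditional subtract, as the follow-up comment below notes.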

## @user200783 2010-04-10 10:48:04

The Wikipedia article describes using Montgomery reduction to increase the efficiency of modular multiplication (and, by extension, modular exponentiation). Do you know if the technique still applies in a situation where there are a large number of modular additions as well as multiplications?

## @Accipitridae 2010-04-12 18:33:55

Addition is done as usual. If both summands are in Montgomery representation then adding them together gives their sum in Montgomery representation. If this sum is larger than the modulus, just subtract the modulus.

## @Maciej Hehl 2010-04-10 23:00:11

I'd like to share a few thoughts.

It's not as simple as MSN proposes, I'm afraid.

In the expression:

`(((AH % B) * ((2^64 - B) % B)) + (AL % B)) % B`

both the multiplication and the addition may overflow. I think one could take that into account and still use the general concept with some modifications, but something tells me it's going to get really scary.

I was curious how the 64-bit modulo operation was implemented in MSVC, and I tried to find something out. I don't really know assembly, and all I had available was the Express edition, without the source of VC\crt\src\intel\llrem.asm, but I think I managed to get some idea of what's going on, after a bit of playing with the debugger and disassembly output. I tried to figure out how the remainder is calculated in the case of positive integers and a divisor >= 2^32. There is some code that deals with negative numbers of course, but I didn't dig into that.

Here is how I see it:

If divisor >= 2^32 both the dividend and the divisor are shifted right as much as necessary to fit the divisor into 32 bits. In other words: if it takes n digits to write the divisor down in binary and n > 32, n-32 least significant digits of both the divisor and the dividend are discarded. After that, the division is performed using hardware support for dividing 64 bit integers by 32 bit ones. The result might be incorrect, but I think it can be proved, that the result may be off by at most 1. After the division, the divisor (original one) is multiplied by the result and the product subtracted from the dividend. Then it is corrected by adding or subtracting the divisor if necessary (if the result of the division was off by one).
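A C sketch of the procedure just described (illustrative, not MSVC's actual `llrem.asm`; it assumes `b >= 2^32`, GCC/Clang's `__builtin_clzll`, and `__int128` for a safe correction step):

```c
#include <stdint.h>

// 64-by-64 modulo for divisor b >= 2^32: discard low bits of both
// operands so the divisor fits in 32 bits, divide, then correct the
// quotient estimate with the original divisor.
uint64_t mod64_shift_estimate(uint64_t a, uint64_t b)
{
    int shift = 64 - __builtin_clzll(b) - 32;   // bits(b) - 32, >= 1 here
    // Quotient estimate: divisor now fits in 32 bits.
    uint64_t q = (a >> shift) / (uint32_t)(b >> shift);
    // a - q*b may be slightly off; fix it up with the original b.
    __int128 r = (__int128)a - (__int128)q * b;
    while (r < 0)  r += b;                      // estimate was too high
    while (r >= b) r -= b;                      // estimate was too low
    return (uint64_t)r;
}
```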

It's easy to divide a 128-bit integer by a 32-bit one, leveraging hardware support for 64-bit by 32-bit division. In case the divisor < 2^32, one can calculate the remainder performing just 4 divisions as follows:

Let's assume the dividend is stored in:

the remainder will go into:

After those 4 steps the variable remainder will hold what you are looking for. (Please don't kill me if I got the endianness wrong. I'm not even a programmer)
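The declarations and steps aren't preserved in this copy; a sketch of the 4-division chain described above (variable names are illustrative) might be:

```c
#include <stdint.h>

// Divide a 128-bit dividend, given as four 32-bit digits (d[3] most
// significant), by a divisor < 2^32, returning the 32-bit remainder.
// Each step divides (previous remainder : next digit) with the
// hardware-supported 64/32 division -- 4 divisions in total.
uint32_t mod128by32(const uint32_t d[4], uint32_t divisor)
{
    uint64_t rem = 0;
    for (int i = 3; i >= 0; --i) {
        uint64_t cur = (rem << 32) | d[i];
        rem = cur % divisor;
    }
    return (uint32_t)rem;
}
```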

In case the divisor is greater than 2^32-1, I don't have good news. I don't have a complete proof that the result after the shift is off by no more than 1 in the procedure I described earlier, which I believe MSVC is using. I think, however, that it has something to do with the fact that the discarded part is at least 2^31 times less than the divisor, the dividend is less than 2^64, and the divisor is greater than 2^32-1, so the result is less than 2^32.

If the dividend has 128 bits, the trick with discarding bits won't work. So in the general case the best solution is probably the one proposed by GJ or caf. (Well, it would probably be the best even if discarding bits worked. Division, multiplication, subtraction and correction on 128-bit integers might be slower.)

I was also thinking about using the floating point hardware. The x87 floating point unit uses an 80-bit precision format with a fraction 64 bits long. I think one can get the exact result of a 64-bit by 64-bit division. (Not the remainder directly, but also the remainder, using multiplication and subtraction as in the "MSVC procedure".) If the dividend is >= 2^64 and < 2^128, storing it in the floating point format seems similar to discarding the least significant bits in the "MSVC procedure". Maybe someone can prove the error in that case is bounded and find it useful. I have no idea if it has a chance to be faster than GJ's solution, but maybe it's worth a try.

## @GJ. 2010-04-11 07:03:50

I think your thinking is more or less correct. Yes, the idea of using x87 double-precision floating point division is also known, but the x87 only supports 63-bit division because the 64th bit is reserved for the mantissa sign, according to IEEE Standard 754 for Binary Floating-Point Arithmetic.

## @Maciej Hehl 2010-04-11 10:55:11

I was talking about the Double-Extended format supported by x87. In double format the fraction is only 53 bits long. In the extended one the fraction or rather the significand is 64 bits long. There is a difference between this format and the smaller ones. In extended format the leading bit of the significand is explicit unlike in double or single ones, but I don't think it changes much. It should be possible to store exactly 64 bit integers in this format. The sign is stored in bit 79 in extended format.

## @GJ. 2010-04-11 16:13:45

I have checked the IEEE Standard and you are right. The mantissa sign is stored in the last byte.

## @Rudy Velthuis 2013-03-14 12:30:43

What you describe is the so called base case division as described by Knuth in his algorithm D (TAOCP Vol. 2). It relies on the fact that if you divide the top two "digits" of the dividend by the top digit of the divisor, the result is off by at most 2. You test this by subtracting the result * divisor from the dividend/remainder and see if it is negative. If so, you add the divisor and correct the quotient until the remainder is positive again. Then you loop for the next lower digit etc.

## @chux - Reinstate Monica 2016-08-31 20:03:43

Agree, `(((AH % B) * ((2^64 - B) % B)) + (AL % B)) % B` has problems.

## @GJ. 2010-04-10 12:03:04

I have made both versions of the Mod128by64 'Russian peasant' division function: classic and speed-optimised. The speed-optimised one can do more than 1,000,000 random calculations per second on my 3 GHz PC and is more than three times faster than the classic function. Comparing the execution time of the 128-by-64 calculation with that of a plain 64-by-64 bit modulo, this function is only about 50% slower.

Classic Russian peasant:

Speed-optimised Russian peasant:

## @Peter Cordes 2018-04-15 04:41:18

On modern Intel CPUs, `rcl reg,1` is 3 uops, but `adc reg,reg` reads and writes CF and ZF identically for only 1 uop since Broadwell, or 2 uops on Haswell and earlier. Similarly, `shl bl,1` could be `add bl,bl`. The only advantage there is running on more ports (not the shifter port(s)), which might not be a bottleneck. (`add same,same` is of course a left-shift because `x*2 = x+x`, putting the carry-out in CF. `adc same,same` does that and also adds the input CF, setting the low bit just like RCL.) AMD has fast `rcl`-by-1, though. agner.org/optimize

## @Sparky 2010-04-08 11:03:55

As a general rule, division is slow, multiplication is faster, and bit shifting is faster yet. From what I have seen of the answers so far, most have used a brute-force approach with bit-shifts. There exists another way. Whether it is faster remains to be seen (AKA profile it).

Instead of dividing, multiply by the reciprocal. Thus, to discover A % B, first calculate the reciprocal of B ... 1/B. This can be done with a few iterations of the Newton-Raphson method of convergence. Doing this well will depend upon a good set of initial values in a table.

For more details on the Newton-Raphson method of converging on the reciprocal, please refer to http://en.wikipedia.org/wiki/Division_(digital)

Once you have the reciprocal, the quotient Q = A * 1/B.

The remainder R = A - Q*B.

To determine whether this would be faster than the brute-force approach (there will be many more multiplies, since we will be using 32-bit registers to simulate 64-bit and 128-bit numbers), profile it.

If B is constant in your code, you can pre-calculate the reciprocal and simply calculate using the last two formulae. This, I am sure will be faster than bit-shifting.
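The two formulae above can be sketched for 64-bit values with a fixed-point reciprocal (a hedged illustration of the idea, not this answer's exact code; it assumes GNU C's `unsigned __int128` for the high-half multiply):

```c
#include <stdint.h>

// Remainder by reciprocal: m = floor(2^64 / b) is precomputed once.
// q = high 64 bits of a*m is a quotient estimate that never exceeds
// the true quotient, so the remainder only needs a small fix-up loop.
uint64_t mod_by_reciprocal(uint64_t a, uint64_t b, uint64_t m)
{
    uint64_t q = (uint64_t)(((unsigned __int128)a * m) >> 64); // Q = A * 1/B
    uint64_t r = a - q * b;                                    // R = A - Q*B
    while (r >= b)                                             // correct the
        r -= b;                                                // estimate
    return r;
}
```

With `b` fixed, `m` can be precomputed once, e.g. as `UINT64_MAX / b` when `b` is not a power of two (which equals floor(2^64/b) in that case).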

Hope this helps.

## @supercat 2013-05-16 18:34:47

Another approach which may sometimes be even better if e.g. the divisor is 2^64-k for some relatively small k, and the dividend is less than 2^128/k, is to add k to the input value, capture and zero the top 64 bits of the dividend, multiply the captured value by k (for a 96-bit or 128-bit result), and add that to the lower 64 bits of the dividend. If the result is greater than 2^64, repeat. Once the result is less than 2^64, subtract k. For values of k below 2^32 (half the divisor size), two capture-zero-multiply-subtract sequences should suffice.
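A simplified sketch of the congruence this relies on (2^64 ≡ k mod 2^64-k), though not supercat's exact add-k/subtract-k sequence; it assumes GNU C's `unsigned __int128` and a small k:

```c
#include <stdint.h>

// (ah*2^64 + al) mod (2^64 - k), for relatively small k: fold the high
// half down by replacing ah*2^64 with ah*k, repeating until the value
// fits in 64 bits, then do one final reduction.
uint64_t mod_pow2_minus_k(uint64_t ah, uint64_t al, uint64_t k)
{
    uint64_t b = 0 - k;                   // the divisor B = 2^64 - k
    while (ah != 0) {
        unsigned __int128 t = (unsigned __int128)ah * k + al; // ah*k + al
        ah = (uint64_t)(t >> 64);         // new (much smaller) high half
        al = (uint64_t)t;                 // new low half
    }
    return al % b;                        // final reduction below B
}
```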

## @Craig McQueen 2013-09-02 23:38:13

The question is about integer calculations. What if `1/B` (or in integer form, `2^64/B` or `2^128/B`) doesn't have an exact integer representation?

## @GJ. 2010-04-06 04:37:15

This is an almost untested, partly speed-modified Mod128by64 'Russian peasant' algorithm function. Unfortunately I'm a Delphi user, so this function works under Delphi. :) But the assembler is almost the same, so...

At least one more speed optimisation is possible! After the 'Huge Divisor Numbers Shift Optimisation' we can test the divisor's high bit: if it is 0, we do not need to use the extra bh register as a 65th bit to store it in. So the unrolled part of the loop can look like:

## @Dale Hagglund 2010-04-06 06:49:32

Perhaps you're looking for a finished program, but the basic algorithms for multi-precision arithmetic can be found in Knuth's Art of Computer Programming, Volume 2. You can find the division algorithm described online here. The algorithms deal with arbitrary multi-precision arithmetic, and so are more general than you need, but you should be able to simplify them for 128 bit arithmetic done on 64- or 32-bit digits. Be prepared for a reasonable amount of work (a) understanding the algorithm, and (b) converting it to C or assembler.

You might also want to check out Hacker's Delight, which is full of very clever assembler and other low-level hackery, including some multi-precision arithmetic.

## @user200783 2010-04-10 12:40:42

Thanks, I think I understand how the algorithms described at sputsoft.com apply to this situation. AFAICT, Algorithm G shows how to perform an mb-bit by nb-bit division as a series of m-n+1 (n+1)b-bit by nb-bit divisions, where b is the number of bits per digit. Algorithm Q then shows how to perform each of these (n+1)b-bit by nb-bit divisions as a single 2b-bit by b-bit division. Given that the largest dividend we can handle is 64-bit, we need to set b=32. The algorithms thus break down our 128-bit by 64-bit division (m=4, n=2) into 3 64-bit by 32-bit divisions. Does this sound accurate?

## @Dale Hagglund 2010-04-10 14:36:57

I can tell you've already put more detailed thought into the algorithms than I did when I posted my reply, so I can't say for sure whether your final count of division operations is right. However, I do think you've got the basic idea of how to proceed.

## @Dale Hagglund 2010-04-10 14:42:58

Another thought: you might want to consider 16-bit digits if you're writing in C and hence don't have direct access to 32b x 32b -> 64b multiply instructions, or don't want to embed your 32-bit digits into a 64-bit integer and use the compiler's own builtin 64-bit arithmetic. I can't think of a strong reason to avoid the latter, but you might want to check out the generated assembly code for it, if you're really, really, really concerned about speed.

## @Craig McQueen 2013-09-04 04:50:14

That sputsoft link seems to be invalid now. Not sure why; the site is still there. This page seems to be connected, in that the `kanooth-numbers` library was once called `sputsoftnumbers`.

The sputsoft page is now located here: janmr.com/blog/2009/08/…

## @MSN 2010-04-05 03:31:23

Given `A = AH*2^64 + AL`:

`A % B == (((AH % B) * ((2^64 - B) % B)) + (AL % B)) % B`

If your compiler supports 64-bit integers, then this is probably the easiest way to go. MSVC's implementation of a 64-bit modulo on 32-bit x86 is some hairy loop-filled assembly (`VC\crt\src\intel\llrem.asm` for the brave), so I'd personally go with that.

No, as Paul said, the target is the 32-bit x86 platform. Intel CPUs in IA32 mode don't support 64-bit division or 128-bit multiplication; that's only possible in 64-bit CPU mode. In that case the method described by caf is much faster!

## @MSN 2010-04-05 16:21:15

@GJ, if the compiler supports 64-bit integers, it will be easier to just use the mod operation for 64-bit integers. caf's method is the one used by MSVC anyway for 32-bit x86, based on my cursory evaluation of the assembly. It also includes an optimization for dividends below 2^32. So you could either code it yourself or just use the existing compiler support.

## @GJ. 2010-04-05 17:03:19

@MSN, yep, you are right, it would be easier, but the demand is speed! The optimisation for dividends below 2^32 isn't useful if you are using random (full-spectrum) UInt64 values, because the ratio of sub-2^32 numbers to all 2^64 numbers is very, very small.

## @MSN 2010-04-05 18:56:15

@GJ, Yes, it's 1/2^32. If the optimal way to divide with a 64-bit dividend requires a bunch of branching anyways (which it does), adding an extra branch for < 2^32 is not going to impact performance.

## @Billy ONeal 2010-04-06 06:15:40

This is also nice because you get a nice perf boost for free if/when you move to x86_64.

## @user200783 2010-04-10 12:54:21

I'm not sure I understand how this works. B is 64-bit, so (AH % B) and ((2^64 - B) % B)) will both be 64-bit. Won't multiplying these together give us a 128-bit number, thus leaving us still needing to perform a 128-bit by 64-bit modulo?

## @user200783 2010-04-10 14:02:50

Thanks for the idea to look at how compilers implement 64-bit by 64-bit modulo on x86. From what I can tell, neither GCC (the function __udivmoddi4 in libgcc2.c) nor MSVC (see ullrem.asm for the unsigned version) use caf's "Russian Peasant" method. Instead, they both seem to use a variation on algorithm Q in the link provided by Dale Hagglund (with n=2, b=32) - approximating the 64-bit by 64-bit division using a 64-bit by 32-bit division, then performing a slight adjustment to correct the result if necessary.

## @drawnonward 2010-04-11 00:15:19

This would be a highly recursive algorithm.

## @chux - Reinstate Monica 2016-08-31 20:01:28

Problem with this approach: the `*` multiplication needs a 128-bit result, making the last step `some_128_bit_positive_value % some_128_bit_positive_value`, and we are back where we started. Try 0x8000_0000_0000_0000_0000_0000_0000_0000 mod 0xFFFF_FFFF_FFFF_FFFE. I'd say the answer should be 2, but your algorithm gives 0 (assuming the product of your multiplication is modulo 64-bit). This code does work for "128-bit integer modulo a 32-bit integer". Perhaps my testing is wrong, but I'd like to know the result of your testing.

## @chux - Reinstate Monica 2016-08-31 20:09:20

Meant to say "making the last step some_128_bit_positive_value % some_64_bit_positive_value"

## @Peter Cordes 2016-09-01 09:01:29

@chux: I agree the answer should be `2` for `0x80000000000000000000000000000000 % 0xFFFFFFFFFFFFFFFE`. I tested it in `calc`, the cmdline arbitrary-precision calculator. I confirmed that truncating to 64 bits (with a bitwise AND with (2^64-1)) breaks the formula, so it does essentially leave you at square 1. `(((AH % B) * ((2^64 - B) % B))&(2^64-1) + (AL % B))&(2^64-1) % B == 0` but `(((AH % B) * ((2^64 - B) % B)) + (AL % B)) % B == 2`. I used `AH=A>>64` and `AL=0`.

## @Adam Shiemke 2010-04-02 20:08:47

If you have a recent x86 machine, there are 128-bit registers for SSE2+. I've never tried to write assembly for anything other than basic x86, but I suspect there are some guides out there.

## @kquinn 2010-04-02 20:34:36

The `xmm` registers are not useful for this type of operation, as they aren't true 128-bit GPRs; they're a bunch of smaller registers packed together for vectorized operations.

## @Ben Collins 2010-04-07 12:27:04

There are 128-bit integer instructions in SSE2. As far as I can tell from the reference manuals, there's no reason they wouldn't be useful for this. There's a multiply, add/subtract, and shift.

## @user200783 2010-04-10 10:41:42

@Ben: In my (brief) look through the Intel manuals I was unable to find a 128-bit integer addition instruction. Do you know what this instruction is called?

## @Ben Collins 2010-04-10 14:43:51

@Paul: PMULUDQ, PADDQ, PSUBQ, PSLLDQ, PSRLDQ. There's an overview listing on v 1, p. 5-25 of the software developer's manual.

## @user200783 2010-04-10 15:18:49

I have looked at those instructions in volume 2 of the Software Developer's Manual and it seems to me that only PSLLDQ and PSRLDQ treat an xmm register as a 128-bit integer. PADDQ and PSUBQ, by contrast, seem to treat an xmm register as "packed quadwords" (i.e. a pair of 64-bit integers). Is this not correct?

## @phuclv 2020-02-12 03:31:30

SIMD registers are for operating on multiple values at once. You can't use it as a single 128-bit value

## @Ahmet Altun 2010-04-02 11:04:47

Since there is no predefined 128-bit integer type in C, the bits of A have to be represented in an array. Although B (a 64-bit integer) can be stored in an `unsigned long long int` variable, it is necessary to put the bits of B into another array as well, in order to work on A and B efficiently.

After that, B is incremented as Bx2, Bx3, Bx4, ... until it is the greatest such multiple less than A. Then (A-B) can be calculated, using some subtraction knowledge for base 2.

Is this the kind of solution that you are looking for?

## @Avi 2010-04-02 11:25:40

That doesn't sound very efficient. It has the potential of taking O(2^128), if B is small and A is large.

## @Ahmet Altun 2010-04-02 11:35:00

The complexity of the algorithm can be reduced by incrementing B using left shifting, i.e. multiplication by 2 each time. When B becomes greater than A, then, starting from the previous value of B, B can be incremented by the initial value of B each time, and so on...