Especially with 64-bit operations the CG sometimes generates:
and r0, r1, #0
which just clears r0 and is equivalent to
mov r0, #0
git-svn-id: trunk@22032 -
In certain cases the CG would emit something like
bic r1, r0, #0
As BIC clears the specified bits, this is equivalent to
mov r1, r0
This patch changes the CG to emit the MOV instead, which the register
allocator will hopefully remove most of the time.
git-svn-id: trunk@22024 -
Currently the register spiller cannot handle the "bond" between an IT*
instruction and the instruction following it, sometimes breaking them
apart, which breaks the build or, worse, the result.
So for now we no longer emit A_IT* in second_cmp64bit but use a
conditional jump instead.
This fixes Mantis #22520
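For illustration (a sketch, not the exact generated code), the fragile form
it eq
moveq r0, #1
pairs the IT with the instruction following it, while the branch-based form
bne .Lskip
mov r0, #1
.Lskip:
has no such pairing the spiller could break apart.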
git-svn-id: trunk@22009 -
In r21686 I've introduced optimized 64-bit shifts for ARM, but the
methods did not check for which machine they have to generate the code.
This patch disables the optimized code for now if the target is in
cpu_thumb2 and falls back to the generic code.
There are 2 problems with the current code:
1.) Thumb-2 does not support a shift by register on all data instructions
the way ARM does (see the example below).
2.) The code does not generate the IT block required for the conditionally
executed code.
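For example, the following is valid ARM but has no Thumb-2 encoding, because
there the second operand of a data instruction can only be shifted by an
immediate (illustrative snippet):
orr r0, r0, r1, lsl r2
On Thumb-2 the shift has to be done separately, e.g.:
lsl r3, r1, r2
orr r0, r0, r3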
git-svn-id: trunk@21997 -
The old code generated a strange IT-sequence:
IT EQ
MOVEQ r0, #1
IT NE
MOVNE r0, #1
Now we generate:
ITE EQ
MOVEQ r0, #1
MOVNE r0, #1
IT stands for IfThen, ITE for IfThenElse; there are a couple of other
forms where the instruction gets extended to handle more of the following
instructions, e.g. ITEE, ITETE etc. Up to 4 instructions can be
handled.
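For example, a block covering 4 instructions could look like this
(illustrative sketch; the two T-slots use the block condition GE, the two
E-slots its inverse LT):
ittee ge
movge r0, #1
addge r1, r1, #1
movlt r0, #0
sublt r1, r1, #1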
git-svn-id: trunk@21996 -
r21885 added a new peephole optimizer. The associated code refactoring
missed a check for
tai(hp1).typ = tai_instruction
This can lead to an access violation later on, because the rest of the
code expects to find a taicpu in hp1.
git-svn-id: trunk@21949 -
order to minimise memory losses due to alignment padding (see the example
below). Not yet enabled by default at any optimization level, but can be
(de)activated separately via -Oo(no)orderfields
o added separate tdef.structalignment method that returns the alignment
of a type when it appears in a record/object/class (factors out
AIX-specific double alignment in structs)
o changed the handling of the offset of a delegate interface
implemented via a field, by taking the field offset on demand
rather than at declaration time (because the ordering optimization
causes the offsets of fields to be unknown until the entire
declaration has been parsed)
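As an illustration of the effect (hypothetical record, assuming natural field
alignment with 4-byte alignment for longint): in declaration order the record
below occupies 16 bytes, while an order that puts the longints first needs
only 12.
type
  tdemo = record
    b1: byte;     { 1 byte + 3 bytes padding }
    l1: longint;  { 4 bytes }
    b2: byte;     { 1 byte + 3 bytes padding }
    l2: longint;  { 4 bytes -> 16 bytes in total }
  end;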
git-svn-id: trunk@21947 -
Like MOV these instructions support 2 operands, with the second being a
shifterop.
Without this patch the asm reader would fail on something like
cmp r0, r1, lsr 16
with
Error: Unknown identifier "LSR"
git-svn-id: trunk@21911 -
ARM cannot reference an arbitrary offset, so it needs some special
handling if the offset goes beyond abs(4095).
The code for do_spill_read and do_spill_written used to be very similar;
I've partially factored out the code into spilling_create_load_store.
The former code loaded the offset from a constant pool, which is a waste
of memory bandwidth and cache lines. The new code tries to find a way to
adjust the base register so the memory location can be reached more
easily; this allows us to handle at least +-1MB with just a single
additional ADD or SUB instruction. If that fails we resort to the normal
constant loading code, which on its own will fall back to loading the
constant from a constant pool.
So instead of:
ldr r1, =16388
ldr r0, [r13, r1]
which takes at least 4 cycles (2 instruction cycles + 2 stall
cycles) on most cores, we try to generate:
add r1, r13, #16384
ldr r0, [r1, #4]
which most armv5+ cores will execute in 2 cycles. We'll also save on
DCache usage.
git-svn-id: trunk@21889 -
If the needed adjustment is not expressible in a shifterconst, the old code
loaded a temporary register (fixed to r12) via a_load_const_reg and used it
to adjust the SP, resulting in:
mov r12, #44
orr r12, r12, #4096
sub sp, sp, r12
The new code will try to split the adjustment into 2 shifter constants and
will do two separate adjustments:
sub sp, sp, #44
sub sp, sp, #4096
If that doesn't work we fall back to the old code, but that should
happen VERY rarely, only for stacks bigger than 256k whose size is not
expressible in 2 shifter constants.
git-svn-id: trunk@21863 -
This removes the duplications in a_op_reg_reg_reg_checkoverflow.
OP_ROL stays separate because it needs some special treatment again.
The code for OP_ROL was changed (a rotate left by n is implemented as a
rotate right by 32-n); previously it generated:
mov tempreg, #32
sub src1, tempreg, src1
mov dst, src2, ror src1
This would trash src1, which MIGHT be a problem, but I'm not totally
sure. The mov/sub pair was replaced with rsb, so the new code looks like
this:
rsb tempreg, src1, #32
mov dst, src2, ror tempreg
If src1 gets freed afterwards the regallocator should be able to change
that into:
rsb src1, src1, #32
mov dst, src2, ror src1
git-svn-id: trunk@21804 -
The previous code was full of duplicated code; this new version just
maps the OP_* constants to the correct SM_* shift mode and does some
special handling for OP_ROL, which is done via OP_ROR.
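A rough sketch of the mapping idea (hypothetical helper, not the literal
compiler code; topcg and tshiftmode are the types used in cgbase and the
ARM cpubase):
function opcg2shiftmode(op: topcg): tshiftmode;
begin
  case op of
    OP_SHL: result := SM_LSL;
    OP_SHR: result := SM_LSR;
    OP_SAR: result := SM_ASR;
    OP_ROR: result := SM_ROR;
    OP_ROL: result := SM_ROR; { the shift amount is rewritten as 32-n }
    else result := SM_None;
  end;
end;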
git-svn-id: trunk@21801 -
This fixes 64bit shifts on arm with a constant shift value of 0.
The old code would have emitted something like this:
mov r0, r0, lsl #32
As 32 is an invalid shift value (and would be wrong anyway), the
assembler refused to assemble the produced source.
The new code will just not emit any code for a shift value of 0.
tests/test/tint642.pp now tests shl/shr 0 on 64 bit values.
tests/webtbs/tw22326.pp is also added as an additional test.
git-svn-id: trunk@21746 -
getintparaloc + adapted all call sites of getintparaloc. This
led to a number of additional, related changes:
o corrected the type information for some getintparaloc parameters
o don't allocate some intparalocs in cases where they aren't used
o changed "const tvardata" parameter into "constref tvardata" for
fpc_variant_copy_overwrite to make pass-by-reference semantics
explicit
o moved a number of routines that now have to call find_system_type()
from cgobj to hlcgobj so that cgobj doesn't have to start depending
on the symtable unit
o added versions of the cpureg alloc/dealloc methods to hlcgobj that
call through to their cgobj counterparts, so we can save/restore
the cpu registers before/after calling system helpers from hlcgobj
(not implemented in hlcgobj itself, because all basic register
allocator functionality is still part of cgobj/cgcpu)
git-svn-id: trunk@21696 -
* set tcgpara.def for the function return location (field introduced for and
already used by the JVM code generator, required for future hlcg
functionality)
git-svn-id: trunk@21691 -
This code generates different versions of assembly depending on the
amount to shift.
Variable Amount: 6 cycles (5 if last shift can be folded)
Constant 1 : 2 cycles
Constant 2-31 : 3 cycles (2 if the last shift can be folded)
Constant 32 : 1 cycle (depends on the register allocator)
Constant 33-64 : 2 cycles
This should speed up softfpu on arm a bit.
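For instance, a 64-bit left shift by a constant 1 can be done in two
instructions, roughly like this (illustrative sketch with the value held in
r1:r0, not necessarily the exact generated code):
adds r0, r0, r0
adc r1, r1, r1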
git-svn-id: trunk@21686 -
RRX (Rotate Right with eXtend) does a single bit right rotation through
the carry. So it does not take any arguments, neither constant nor
register.
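For example (illustrative only):
mov r0, r0, rrx
rotates r0 right by one bit, moving the carry flag into bit 31.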
Also remove redundant shiftmode2str and replace usage of it with gas_shiftmode2str.
git-svn-id: trunk@21685 -
This code will generate the following sequence on ARM (r0 = src, r1 = dst):
movs r1, r0
rsbmi r1, r0, #0
MOVS sets the N flag when the MSB of r0 is set; if it is set, the RSB
calculates dst := 0 - src.
git-svn-id: trunk@21678 -
The old version did not check the S postfix for MOV, which resulted in
removing instructions like:
movs r0, r0
which breaks later flag usage.
git-svn-id: trunk@21676 -
This now generates:
mvn r0, r0, lsl #24/#16
mov r0, r0, lsr/asr #24/#16
The lsr/asr might be folded into a following instruction, making the
whole operation 1 cycle instead of 2-3 with the previous solution.
git-svn-id: trunk@21658 -
OP_ADD, OP_SUB and OP_ORR will be split into two instructions if possible
when a load/const construction is required.
OP_AND is a bit different, because we can't just split it up, but we try
to find a two-instruction BIC equivalent for it.
Till now code like
a:= a and $FFFF;
produced code like
mov r0, $FF00
orr r0, r0, $FF
and r1, r1, r0
With this addition we produce code like:
bic r1, r1, #$FF000000
bic r1, r1, #$FF0000
i.e. we clear the bits of the inverted mask ($FFFF0000), split into two
shifter constants. This saves us at least a cycle and in some cases also a
load from the constant pool.
This uses the new split_into_shifter_const function.
git-svn-id: trunk@21647 -
* use split_into_shifter_const to reduce the MOV/ORR combination to a
single check and allow a broader range of combinations.
* Introduce an MVN/BIC combination to load values which have more 1 bits
than 0 bits set (like small negative values)
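For example (illustrative values), $FFFF00FE can be loaded in two
instructions as:
mvn r0, #1
bic r0, r0, #$FF00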
git-svn-id: trunk@21646 -
This function tries to split up a 32-bit value into two shifter
constants. This approach finds a broader range of two-shifter-constant
combinations.
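A shifter constant is an 8-bit value rotated right by an even amount. A rough
sketch of the idea in Free Pascal (hypothetical helper, not the compiler's
actual implementation):

function split_into_two_shifter_consts(value: dword; out imm1, imm2: dword): boolean;

  function is_shifter_const(v: dword): boolean;
  var
    i: integer;
  begin
    result := false;
    for i := 0 to 15 do
      { undo a rotate-right by 2*i and check whether an 8-bit value remains }
      if RolDWord(v, 2*i) <= $FF then
        exit(true);
  end;

var
  i: integer;
begin
  result := false;
  for i := 0 to 15 do
  begin
    { put the bits covered by one rotated 8-bit window into imm1 ... }
    imm1 := value and RorDWord($FF, 2*i);
    { ... and check whether the remaining bits fit a second shifter constant }
    imm2 := value and not imm1;
    if (imm1 <> 0) and is_shifter_const(imm2) then
      exit(true);
  end;
end;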
git-svn-id: trunk@21645 -
cgutils, and define them so they are no larger than what is required by
the current target platform
* added cgutils to the uses clause of several units that use the
tcpuregisterset type
git-svn-id: trunk@21624 -
RS_INVALID superregister (instead of sometimes RS_NO and sometimes
RS_INVALID)
* check for RS_INVALID in tcg.g_save_registers() and ignore such entries
git-svn-id: trunk@21622 -