* disable fix for #33439 during bootstrapping with 3.0.x, as 3.0.x cannot compile the currency division without the fix above
git-svn-id: trunk@38558 -
* compinnr.inc include file converted to a unit
* inline number field size stored in ppu increased from byte to longint
* inlines in the parse tree (when written with the -vp option) are now printed
with their enum name instead of their number
git-svn-id: trunk@36174 -
Switched codegeneration of VFPv2 and VFPv3 to use UAL mnemonics and syntax.
Updated VFP code in RTL to use UAL syntax too.
Added preliminary ELF support for ARM.
Added support for linking of WinCE COFF files. Should work with a standard ARMv4-I target.
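As an illustration of the UAL switch mentioned above (a minimal sketch, assuming an ARM
target built with VFP support, e.g. -Cfvfpv2; the procedure name is made up):

procedure AddSingles; assembler; nostackframe;
asm
  vadd.f32 s0, s0, s1   { UAL syntax; the pre-UAL spelling was "fadds s0, s0, s1" }
end;

begin
  AddSingles;
end.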
git-svn-id: branches/laksen/armiw@29247 -
Background: these operations are defined as flipping or clearing the upper (sign) bit of the number, respectively, and never result in precision loss or raise floating-point exceptions.
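A minimal Pascal illustration of that bit-level definition (a sketch assuming IEEE-754
doubles; the helper names are made up):

program signbits;

function BitwiseNeg(d: double): double;
var
  q: qword absolute d;
begin
  q := q xor (qword(1) shl 63);      { flip the upper (sign) bit }
  BitwiseNeg := d;
end;

function BitwiseAbs(d: double): double;
var
  q: qword absolute d;
begin
  q := q and not (qword(1) shl 63);  { clear the upper (sign) bit }
  BitwiseAbs := d;
end;

begin
  writeln(BitwiseNeg(1.5):0:1, ' ', BitwiseAbs(-2.25):0:2);
end.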
git-svn-id: trunk@27411 -
Fixed a bug where ARMv7M targets would not use the DIV instructions.
Moved many size-optimizing Thumb2 peephole optimizations to PostPeepHoleOptsCpu. Previously those optimizations could make it impossible to reuse the shared arm peephole optimizations.
Reenabled a fixed MLA/MLS peephole optimization.
Refactored some FindRegDealloc+regLoadedWithNewValue into RegEndOfLife calls.
Fixed some broken UXTB/UXTH optimizations. Previously they would also match UXT* instructions with ROR shifter ops.
git-svn-id: trunk@26198 -
In r21686 I introduced optimized 64bit shifts for ARM. But the
methods did not check which CPU the code is being generated for.
This patch disables the optimized code for now if the target is in
cpu_thumb2 and falls back to the generic code.
There are 2 problems with the current code:
1.) Thumb-2 does not support shift by register on all data instructions
as ARM does.
2.) The code does not generate the required IT-block for the conditionally
executed code.
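For illustration (a sketch, not code from this patch): in ARM state the shift code can
use a single conditionally executed instruction with a register-shifted operand, e.g.

  orrne r1, r1, r0, lsl r2

while Thumb-2 allows neither the register-controlled shift inside the data instruction
nor the bare conditional suffix; it would need something like

  lsl   r3, r0, r2
  it    ne
  orrne r1, r1, r3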
git-svn-id: trunk@21997 -
This fixes 64bit shifts on arm with a constant shift value of 0.
The old code would have emitted something like this:
mov r0, r0, lsl #32
As 32 is an invalid shift value (and would be wrong anyway), the
assembler refused to assemble the produced source.
The new code will just not emit any code for a shift value of 0.
tests/test/tint642.pp now tests shl/shr 0 on 64 bit values.
tests/webtbs/tw22326.pp is also added as an additional test.
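For reference, the pattern the tests exercise boils down to something like this
(a minimal sketch, not the actual test sources):

var
  q: qword;
  i: int64;
begin
  q := $123456789ABCDEF0;
  i := -1;
  q := q shl 0;    { previously emitted an invalid "lsl #32" }
  i := i shr 0;
  writeln(q, ' ', i);
end.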
git-svn-id: trunk@21746 -
This code generates different versions of assembly depending on the
shift amount.
Variable Amount: 6 cycles (5 if last shift can be folded)
Constant 1 : 2 cycles
Constant 2-31 : 3 cycles (2 if last shift can be folded)
Constant 32 : 1 cycle (depends on the register allocator)
Constant 33-64 : 2 cycles
This should speed up softfpu on arm a bit.
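For example (a sketch; variable names are made up), the rows above correspond to
source code such as:

var
  x: qword;
  n: integer;
begin
  x := 1; n := 5;
  x := x shl n;    { variable amount -> 6-cycle sequence }
  x := x shl 1;    { constant 1      -> 2-cycle sequence }
  x := x shl 17;   { constant 2-31   -> 3-cycle sequence }
  x := x shl 32;   { constant 32     -> 1 cycle          }
  x := x shl 40;   { constant 33-64  -> 2-cycle sequence }
end.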
git-svn-id: trunk@21686 -
future use by high level code generator targets
o this in turn required that all a_load*_loc* methods are called via
hlcg rather than via cg, since a location can be a subsetref/reg and
those are no longer handled in tcg
o that then required moving several force_location_* routines into
thlcg because they use a_load_loc*, but did not take tdef size
parameters (which are required by the thlcg a_load_loc* routines)
o the only practical consequence is that from now on, you have to
use hlcg.location_force_mem/reg() (fpureg not yet) and
hlcg.gen_load_loc_cgpara() instead of the removed versions from ncgutil,
and hlcg.a_load*loc*() instead of cg.a_load*loc* if a subsetref/reg
might be involved
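In practice the migration looks roughly like this (a sketch of compiler-internal
code; the exact thlcg parameter lists are written from memory and may differ, the
point is the added tdef size arguments):

{ before (removed ncgutil version, tcgsize-based) }
location_force_reg(current_asmdata.CurrAsmList, left.location,
  def_cgsize(left.resultdef), false);
{ after (thlcg version, tdef-based) }
hlcg.location_force_reg(current_asmdata.CurrAsmList, left.location,
  left.resultdef, left.resultdef, false);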
git-svn-id: trunk@21287 -
o new eabihf (hard float) abi
o vfpv3_d16 variant of VFP (default variant used by EABI assemblers: VFPv3
with only 16 double registers instead of 32), which is passed on to GNU as
o make the odd numbered single precision floating point VFP registers
available for explicit allocation for use by the calling convention
* fixed copy/paste error in stdname of S30 register
-> use -dFPC_ARMHF to create an ARM eabi hard float compiler
(mantis #21554)
git-svn-id: trunk@20660 -
+ RTL support:
o VFP exceptions are disabled by default on Darwin,
because they cause kernel panics on iPhoneOS 2.2.1 at least
o all denormals are truncated to 0 on Darwin, because disabling
that also causes kernel panics on iPhoneOS 2.2.1 (probably
because otherwise denormals can also cause exceptions)
* set softfloat rounding mode correctly for non-wince/darwin/vfp
targets
+ compiler support: only half the number of single precision
registers is available due to limitations of the register
allocator
+ added a number of comments about why the stackframe on ARM is
set up the way it is by the compiler
+ added regtype and subregtype info to regsets, because they're
also used for VFP registers (+ support in assembler reader)
+ various generic support routines for dealing with floating point
values located in integer registers that have to be transferred to
mm registers (needed for VFP)
* renamed use_sse() to use_vectorfpu() and also use it for
ARM/vfp support
o only superficially tested for Linux (compiler compiled with -Cparmv6
-Cfvfpv2 works on a Cortex-A8, no testsuite run performed -- at least
the fpu exception handler still needs to be implemented), Darwin has
been tested more thoroughly
+ added ARMv6 cpu type and made it default for Darwin/ARM
+ ARMv6+ implementations of atomic operations using ldrex/strex (see the
sketch at the end of this entry)
* don't use r9 on Darwin/ARM, as it's reserved under certain
circumstances (don't know yet which ones)
* changed C-test object files for ARM/Darwin to ARMv6 versions
* check in assembler reader that regsets are not empty, because
instructions with a regset operand have undefined behaviour in that
case
* fixed resultdef of tarmtypeconvnode.first_int_to_real in case of
int64->single type conversion
* fixed constant pool locations in case 64 bit constants are generated,
and/or when vfp instructions with limited reach are present
WARNING: when using VFP on an ARMv6 or later cpu, you *must* compile all
code with -Cparmv6 (or higher), or you will get crashes. The reason is
that storing/restoring multiple VFP registers must happen using
different instructions on pre/post-ARMv6.
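As an illustration of the ldrex/strex pattern referred to above (a sketch, not the
exact RTL code; assumes an ARMv6+ target, the EABI convention of the var parameter
in r0 and the result in r0, and omits any memory barrier):

program atomicdemo;

var
  counter: longint = 0;

{ mirrors what an ARMv6+ InterlockedIncrement can look like }
function AtomicIncr(var Target: longint): longint; assembler; nostackframe;
asm
.Lretry:
  ldrex r1, [r0]        { load-exclusive the current value   }
  add   r1, r1, #1
  strex r2, r1, [r0]    { try to store; r2 = 0 on success    }
  cmp   r2, #0
  bne   .Lretry         { exclusive reservation lost, retry  }
  mov   r0, r1          { return the incremented value       }
end;

begin
  writeln(AtomicIncr(counter));
end.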
git-svn-id: trunk@14317 -