o separate information for reading and writing, because e.g. in a
try-block, only the writes to local variables and parameters are
volatile (they have to be committed immediately in case the next
instruction causes an exception); see the sketch after this list
o for now, only references to absolute memory addresses are marked
as volatile
o the volatility information is (should be) properly maintained throughout
all code generators for all architectures with this patch
o no optimizers or other compiler infrastructure uses the volatility
information yet
o this functionality is not (yet) exposed at the language level; it
is only for internal code generator use right now
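
A minimal sketch (not part of the original commit message) of why writes to
locals inside a try-block have to be treated as volatile; names and values
are made up for illustration:

  {$mode objfpc}
  program tryvolatile;

  uses
    sysutils;

  var
    step, divisor, q: longint;
  begin
    divisor := 0;
    step := 0;
    try
      step := 1;            { must be committed to memory before the div }
      q := 10 div divisor;  { raises EDivByZero at run time }
      step := 2;
      writeln(q);
    except
      { the handler has to observe step = 1, not a stale register copy }
      writeln('failed after step ', step);
    end;
  end.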
git-svn-id: trunk@34996 -
labels of LOC_JUMP in the node's location. This generates some extra jumps
for short-circuit boolean and/or-expressions if optimizations are off, but
with optimizations enabled the generated code is the same (except for the JVM
target, because the jump threading optimization isn't enabled there yet).
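
A minimal example (not from the commit) of the kind of short-circuit
and-expression affected; with optimizations off it is now evaluated via the
true/false labels of a LOC_JUMP location:

  program shortcircuit;

  type
    PInt = ^longint;

  { each operand branches to a "true" or "false" label instead of
    materialising an intermediate boolean value }
  function InRange(p: PInt): boolean;
  begin
    InRange := (p <> nil) and (p^ >= 0) and (p^ <= 100);
  end;

  var
    v: longint;
  begin
    v := 42;
    writeln(InRange(nil));   { FALSE, the right operands are never evaluated }
    writeln(InRange(@v));    { TRUE }
  end.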
git-svn-id: trunk@31431 -
to free up space for more ait_* types in taitype (can't have
more than 32 because they have to fit in a small set)
o factored out writing of floating point numbers as an array of
bytes in the external assemblers
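
A minimal sketch (not part of the commit) of what emitting a floating point
constant as raw bytes amounts to; the .byte directive and the byte order
shown are illustrative only:

  program floatbytes;

  var
    d: double;
    b: array[0..sizeof(double)-1] of byte;
    i: integer;
  begin
    d := 0.1;
    move(d, b, sizeof(d));        { reinterpret the stored IEEE bits }
    write(#9'.byte'#9);
    for i := low(b) to high(b) do
    begin
      write('$', hexstr(b[i], 2));
      if i < high(b) then
        write(',');
    end;
    writeln;
  end.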
git-svn-id: branches/hlcgllvm@28105 -
* Use fpc_[int64|qword]_to_double instead of [int64|qword]_to_float64; this makes the RTL no longer dependent on softfloat code (see the sketch below).
* Move 32-bit values from integer registers to FPU registers without using memory.
* Fixed branching: it was still using a macro, and the delay slot was missing.
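
A minimal sketch (not from the commit) of the conversion that is rerouted: on
a soft-float MIPS target an assignment like the one below is now compiled
into a call to the fpc_int64_to_double RTL helper rather than the softfloat
int64_to_float64 routine; the Pascal source itself is target-independent:

  program cvtint64;

  var
    i: int64;
    d: double;
  begin
    i := 123456789012345;
    d := i;              { implicit int64 -> double conversion }
    writeln(d:0:0);
  end.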
git-svn-id: trunk@25071 -
* Reused applicable code from the above-mentioned method in tMIPSELnotnode.second_boolean; it is more efficient in handling 64-bit data.
git-svn-id: trunk@23531 -
21558:
use inherited first_int_to_real to avoid mixing doubles and singles
it fixes the failure of test/cg/taddcurr.pp
21559:
set the default rounding mode to round-to-nearest instead of round-to-zero;
this fixes test/cg/taddcurr.pp (see the sketch below)
21560:
enable softfpu; the default first_int_to_real depends on int64_to_float64/32 etc.
This is needed by the patch in r21558
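
A minimal sketch (not part of the original commits) of what the default
rounding mode affects: currency is a scaled 64-bit integer, so converting a
double to currency rounds the scaled value according to the current mode.
Which concrete values end up differing depends on the target's conversion
path:

  {$mode objfpc}
  program roundmode;

  uses
    math;

  var
    d: double;
    c: currency;
  begin
    d := 2.00005;

    SetRoundMode(rmNearest);   { the default established by r21559 }
    c := d;
    writeln('nearest : ', c);

    SetRoundMode(rmTruncate);  { round toward zero }
    c := d;
    writeln('truncate: ', c);
  end.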
git-svn-id: trunk@21601 -
future use by high level code generator targets
o this in turn required that all a_load*_loc* methods be called via
hlcg rather than via cg, since a location can be a subsetref/reg and
those are no longer handled in tcg
o that then required moving several location_force_* routines into
thlcg because they use a_load_loc*, but did not take tdef size
parameters (which are required by the thlcg a_load_loc* routines)
o the only practical consequence is that from now on, you have to
use hlcg.location_force_mem/reg() (fpureg not yet) and
hlcg.gen_load_loc_cgpara() instead of the removed versions from ncgutil,
and hlcg.a_load*loc*() instead of cg.a_load*loc* if a subsetref/reg
might be involved
git-svn-id: trunk@21287 -