
In compiler optimization, register allocation is the process of assigning a large number of target program variables onto a small number of CPU registers.

Register allocation can happen over a basic block (local register allocation), over a whole function/procedure (global register allocation), or across function boundaries via the call graph (interprocedural register allocation). When done per function/procedure, the calling convention may require the insertion of save/restore code around each call site.

Context


Register allocation is performed on an intermediate representation of the code rather than on the source code itself: it takes place near the end of the compilation chain, once the program has been lowered to a form close to machine instructions.

Translation of programming languages


In computer software, a program written in source code (e.g., Java, Swift, or assembly) needs to be executed. This may require the translation of the source code into some kind of intermediate representation, which will then be executed. There are two main strategies to execute code: interpretation and compilation. (Machine code can itself also be translated into other machine code, a process known as binary recompilation.)

These two strategies share the same first stages:

  • The first stage is called preprocessing. In this stage, all information that is not meaningful for the computer (e.g., comments) is removed.
  • The second to fourth stages are dedicated to lexical, syntax and semantic analysis: the preprocessed code is analyzed to check that it complies with the language's grammar, and may be turned into a parse tree, an abstract syntax tree or another kind of intermediate representation.

The interpretation strategy does not require the code to be translated into machine code: it directly performs the instructions that have been generated in the previous stages. The compilation strategy, however, needs to translate the intermediate representation into machine language instructions that can be executed directly by a computer's central processing unit (CPU). To do so, it generally first translates the IR into assembly instructions specific to the target processor architecture, and then an assembler is used to translate the assembly instructions into object code. The output consists of actual instructions to be run by the target processor. [1]



Register allocation


In many programming languages, the programmer may use any number of variables. The computer can quickly read and write registers in the CPU, so a program runs faster when more of its variables can reside in CPU registers. In addition, code that accesses registers is often more compact, so the code is smaller and can be fetched faster than code that accesses memory. However, the number of registers is limited in most CPUs. Therefore, when the compiler is translating code to machine language, it must decide how to allocate variables to the limited number of registers in the CPU. [2][3]




Not all variables are in use (or "live") at the same time, so, over the lifetime of a program, a given register may be used to hold different variables. However, two variables in use at the same time cannot be assigned to the same register without corrupting one of them. If there are not enough registers to hold all the variables, some variables may be moved to and from RAM; this process is called "spilling" the registers. A variable that lives in a register during some parts of its lifetime and in memory during others is said to be "split". Accessing RAM is significantly slower than accessing registers, so a compiled program runs slower when it spills. Therefore, an optimizing compiler aims to assign as many variables to registers as possible. "Register pressure" is the technical term for the demand placed on the registers; when register pressure is high, more spills and reloads are needed.
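For illustration, here is a minimal Python sketch that computes live intervals for straight-line code. The instruction format (defined_var, used_vars) and the function name are assumptions made for this example; a real compiler would run a data-flow liveness analysis instead.

  # Minimal sketch: compute live intervals for straight-line code.
  # Each instruction is assumed to be a pair (defined_var, used_vars).

  def live_intervals(instructions):
      """Return {var: (first_mention, last_mention)} over instruction indices."""
      intervals = {}
      for i, (defined, used) in enumerate(instructions):
          for v in ([defined] if defined else []) + list(used):
              start, _ = intervals.get(v, (i, i))
              intervals[v] = (start, i)
      return intervals

  # Example: a = 1; b = 2; c = a + b; d = c + a
  code = [("a", []), ("b", []), ("c", ["a", "b"]), ("d", ["c", "a"])]
  print(live_intervals(code))  # {'a': (0, 3), 'b': (1, 2), 'c': (2, 3), 'd': (3, 3)}

Two intervals that overlap, such as those of a and b above, cannot share a register.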

In addition, some computer designs cache frequently-accessed registers. So, programs can be further optimized by assigning the same register to a source and destination of a move instruction whenever possible. This is especially important if the compiler is using an intermediate representation such as static single-assignment form (SSA). In particular, when SSA is not fully optimized it can artificially generate additional move instructions.


Register allocation is usually divided into successive phases: move insertion, spilling, and assignment/coalescing:

The move insertion phase

This phase consists of inserting move instructions that split the live ranges of variables, with the goal of achieving a net improvement in code quality by improving the results of the other components. [4]

The spill phase

This phase consists of storing variables in memory rather than in registers, while attempting to keep memory accesses efficient. [5]

The assignment coalescing phase

This phase consists of mapping the not-yet-spilled variables to physical registers, and aims to reduce the number of move instructions between registers. [6]


“The coalescing component of register allocation attempts to eliminate existing move instructions by allocating the operands of the move instruction to identical locations. The spilling component attempts to minimize the impact of accessing variables in memory.”

“The complexity of register allocation for a fixed schedule comes from two main optimizations, spilling and coalescing. Spilling decides which variables should be stored in memory to make register assignment (the mapping of other variables to registers) possible while minimizing the overhead of stores and loads. Register coalescing aims at minimizing the overhead of moves between registers.” [7]

Usually, register allocation is performed when the code is compiled (and not interpreted), as the allocation decisions directly shape the generated assembly. To ease this step and help with optimization, the code is usually translated into some kind of intermediate representation (bytecode, AST, SSA representation).

At this stage, the compiler has full control over the machine code it generates; the computation and optimization of register allocation are then performed on this intermediate representation.


The Static Single Assignment (SSA) form is another intermediate representation. It is widely used in the context of register allocation because of the properties described below, which simplify liveness and interference analysis.

"The static single assignment (SSA) form is an intermediate representation with very interesting properties. A code is in SSA form when every scalar variable has only one textual definition in the program code. Most compilers use a particular SSA form, the strict SSA form, with the additional so-called dominance property : given a use of a variable, the definition occurs before any uses on any path going from the beginning of the program (the root) to a use. One of the useful properties of such a form is that the dominance graph is a tree and the live ranges of the variables (delimited by the definition and the uses of a variable) can be viewed as subtrees of this dominance tree." [8]

Common problems raised in register allocation


"A variable is live if it holds a value which may be used later in the program. Two variable which are live simultaneously are said to interfere, since they can not use the same register resources." [9]

Several issues raise or affect register pressure and complicate allocation:

  • Aliasing: on some architectures, assigning a value to one register can affect the value of another, for example when registers overlap.
  • Pre-coloring: some variables are constrained to particular registers, for example by the calling convention.
  • Live range splitting: a live range can be split so that its parts are allocated independently, trading extra move instructions for lower register pressure.

The spill problem: "Unfortunately, our work shows that SSA does not simplify the spill problem like it does for the assignment (coloring) problem. Still, our results can provide insights for the design of aggressive register allocators that trade compile time for provably 'optimal' results." [10]

NP-completeness: since any graph can arise as an interference graph and finding a minimal graph coloring is NP-complete, register allocation in its general form is NP-complete as well.


Register allocation techniques


Global vs. local register allocation approaches


“register allocation needs to make a trade-off between the time spent on finding a solution and the resulting code quality. One of these trade-offs is whether to perform register allocation locally, i.e. on the scope of a basic block, or globally by looking at the whole compilation unit, i.e., a method.” [11]


“Given a sequence of instructions (basic block) and a number of general purpose registers, find the schedule of variables in registers that minimizes the total traffic between the CPU and the memory system.” [12]

“Local register allocation assigns registers to variables in basic blocks, which are maximal branch-free sequences of instructions. Global register allocation assigns registers to variables throughout the program. The local register allocation problem is general enough to model online paging with write-backs and weighted caching. An optimum local allocation schedules the loading of values from memory into registers and the storing from registers into memory. The main difficulty of local register allocation stems from the trade-off between the cost of loads and the cost of stores.” [13]

“To the best of our knowledge, local register allocation was first considered formally in a 1966 paper by Horwitz, Karp, Miller, and Winograd [HKMW66]. In that paper, an algorithm was presented to produce an optimal allocation through dynamic programming”[14]


Local register allocation was formalized early on: "A procedure for index register allocation is described. The rules of this procedure are shown to yield an optimal allocation for 'straight line' programs." [15]

Graph-coloring allocation (global)


Graph-coloring allocation is the predominant approach to solve register allocation. It was first proposed by Chaitin et al. [16] In this approach, nodes in the graph represent live ranges (variables, temporaries, virtual/symbolic registers) that are candidates for register allocation. Edges connect live ranges that interfere, i.e., live ranges that are simultaneously live at at least one program point. Register allocation then reduces to the graph coloring problem, in which colors (registers) are assigned to the nodes such that two nodes connected by an edge do not receive the same color. [17]




Using liveness analysis, an interference graph can be built. The interference graph, an undirected graph whose nodes are the program's variables, is used to model which variables cannot be allocated to the same register. [18]
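As a toy illustration (not any particular compiler's API), an interference graph can be derived from per-program-point live sets along these lines:

  # Minimal sketch: build interference edges from per-point live sets.
  from itertools import combinations

  def interference_graph(live_sets):
      edges = set()
      for live in live_sets:
          for u, v in combinations(sorted(live), 2):
              edges.add((u, v))         # u and v are live simultaneously
      return edges

  # 'a' and 'b' overlap at the second point; 'b' and 'c' never overlap
  print(interference_graph([{"a"}, {"a", "b"}, {"a", "c"}]))  # {('a', 'b'), ('a', 'c')}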

Graph-coloring allocation has three major drawbacks. First, it relies on graph coloring to decide which variables are spilled, and finding a minimal coloring of a graph is NP-complete. [19] Second, unless live-range splitting is used, evicted variables are spilled everywhere: store (resp. load) instructions are inserted as early (resp. late) as possible, i.e., just after (resp. before) variable definitions (resp. uses). Third, a variable that is not spilled is kept in the same register throughout its whole lifetime. [20]

Also, graph coloring is an aggressive technique for allocating registers, but is computationally expensive due to its use of the interference graph, which can have a worst-case size that is quadratic in the number of live ranges[21]

The traditional formulation of graph-coloring register allocation implicitly assumes a single bank of non-overlapping general-purpose registers and does not handle irregular architectural features like overlapping register pairs, special-purpose registers and multiple register banks. [22]

The main phases in a Chaitin-style graph-coloring register allocator (a toy sketch of the simplify and select phases follows the list): [23]

  1. Renumber: discover live range information in the source program.[24]
  2. Build: build the interference graph.
  3. Coalesce: merge the live ranges of non-interfering variables related by copy instructions.
  4. Spill cost: compute the spill cost of each variable, a measure of the impact of mapping a variable to memory on the speed of the final program.
  5. Simplify: repeatedly remove low-degree nodes from the interference graph, following Kempe's coloring method.
  6. Spill Code: insert spill instructions, i.e., loads and stores that move values between registers and memory.
  7. Select: assign a register to each variable.
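The following Python sketch illustrates only the simplify and select phases (Kempe's method) for k registers; the graph encoding is an assumption of this example, and a node that receives None would have to be spilled.

  # Illustrative sketch of Kempe-style simplify-and-select with k registers.
  # graph maps each variable to the set of its neighbours.

  def color(graph, k):
      graph = {v: set(ns) for v, ns in graph.items()}
      stack = []
      # Simplify: repeatedly remove a node with fewer than k neighbours.
      while graph:
          node = next((v for v in graph if len(graph[v]) < k), None)
          if node is None:
              node = max(graph, key=lambda v: len(graph[v]))  # spill candidate
          stack.append((node, graph.pop(node)))
          for ns in graph.values():
              ns.discard(node)
      # Select: pop nodes and give each the lowest colour unused by its neighbours.
      colors = {}
      for node, neighbours in reversed(stack):
          used = {colors[n] for n in neighbours if n in colors}
          colors[node] = next((c for c in range(k) if c not in used), None)
      return colors

  print(color({"a": {"b"}, "b": {"a", "c"}, "c": {"b"}}, 2))  # {'c': 0, 'b': 1, 'a': 0}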



One later improvement to Chaitin-style graph coloring, found by Briggs et al., is conservative coalescing. This improvement adds a criterion for deciding when two live ranges can be merged: in addition to being non-interfering, two variables can only be coalesced if their merging does not cause further spilling. Briggs et al. introduce a second improvement to Chaitin's work, biased coloring, which tries to assign the same color to live ranges that are copy-related.
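A hedged sketch of the Briggs conservative test: two nodes may be coalesced if the merged node would have fewer than k neighbours of significant degree (degree at least k). This simplified version uses current degrees; production allocators adjust neighbour degrees for the merge.

  # Simplified Briggs conservative-coalescing test; graph maps variables
  # to neighbour sets, k is the number of available registers.

  def briggs_can_coalesce(graph, u, v, k):
      merged_neighbours = (graph[u] | graph[v]) - {u, v}
      significant = [n for n in merged_neighbours if len(graph[n]) >= k]
      return len(significant) < k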

Coalescing


In the context of register allocation, coalescing is the act of mapping two non-interfering variables that are related by a copy instruction to the same register. For example, if two variables v1 and v2 do not interfere, and they are related by a copy instruction, that is, the source program contains an instruction such as v1 = v2, then it is desirable that these variables be allocated into the same register R. In this case, we will have the copy instruction R = R, which is redundant and can safely be removed from the target program.[25]
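Continuing the v1/v2 example, here is a minimal sketch of the graph update performed when coalescing two non-interfering nodes; the data layout is an assumption of this illustration.

  # Merge node v into node u in an interference graph so that both
  # variables share one register; v must not interfere with u.

  def coalesce(graph, u, v):
      assert v not in graph[u], "interfering variables cannot be coalesced"
      graph[u] |= graph[v]              # u inherits v's interferences
      for n in graph.pop(v):
          graph[n].discard(v)
          graph[n].add(u)
      return graph

  g = {"v1": {"x"}, "v2": {"y"}, "x": {"v1"}, "y": {"v2"}}
  print(coalesce(g, "v1", "v2"))  # {'v1': {'x', 'y'}, 'x': {'v1'}, 'y': {'v1'}}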

Aggressive Coalescing


Conservative Coalescing


Optimistic Coalescing


Incremental Conservative Coalescing


Linear Scan (global)


Greedy algorithm


Linear scan in JIT


“Linear Scan is a global register allocation approach meaning that the algorithm works on the whole compilation unit. The idea is to bring basic blocks of the control-flow graph into a linear order. The liveness of a variable is expressed as an interval along this linear order. The intervals are then traversed based on their start positions, from earliest to latest.”

Linear Scan is used in numerous dynamic compilers, including the HotSpot client compiler, the Jikes RVM, Google's JIT compiler for JavaScript (V8), and initially also in LLVM. [26] One known limitation is that “liveness information cannot be expressed as a continuous range on a linear list of blocks”. [27] Another concerns “lifetime holes, i.e. ranges where the value of a variable is not needed. This information allows these approaches to find better allocations compared to the original Linear Scan. However, lifetime holes introduce a more complex representation which leads to an allocation algorithm which is no longer linear.” [28]


“This algorithm is not based on graph coloring, but allocates registers to variables in a single linear-time scan of the variables' live ranges. The linear scan algorithm is considerably faster than algorithms based on graph coloring” [29]

"Given live variable information (obtained, for example, via data-flow analysis [Aho et al. 1986]), live intervals can be computed easily with one pass through the intermediate representation. Interference among live intervals is captured by whether or not they overlap. Given R available registers and a list of live intervals, the linear scan algorithm must allocate registers to as many intervals as possible, but such that no two overlapping live intervals are allocated to the same register. If n>R live intervals overlap at any point, then at least n − R of them must reside in memory” [30]

“Linear scan (LS), on the other hand, does not build an interference graph, but instead allocates registers to variables in a greedy fashion by scanning all the live ranges in a single pass. It is simple, efficient, and produces a relatively good packing of all the variables of a method into the available physical registers.” [31]
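Based on the descriptions above, here is a simplified Python sketch of linear scan over precomputed live intervals, with the "spill the interval that ends furthest" heuristic; data shapes are assumptions of this example.

  # Simplified linear scan: intervals maps each variable to (start, end),
  # R is the number of registers; returns a register number or "spill".

  def linear_scan(intervals, R):
      order = sorted(intervals, key=lambda v: intervals[v][0])
      active = []                        # variables currently holding a register
      location = {}
      free = list(range(R))
      for v in order:
          start, end = intervals[v]
          # Expire intervals that ended before v starts, freeing their registers.
          for old in [a for a in active if intervals[a][1] < start]:
              active.remove(old)
              free.append(location[old])
          if free:
              location[v] = free.pop()
              active.append(v)
          else:
              # Spill whichever live interval ends furthest in the future.
              victim = max(active, key=lambda a: intervals[a][1])
              if intervals[victim][1] > end:
                  location[v] = location[victim]
                  location[victim] = "spill"
                  active.remove(victim)
                  active.append(v)
              else:
                  location[v] = "spill"
      return location

  print(linear_scan({"a": (0, 4), "b": (1, 2), "c": (3, 5)}, 1))
  # {'a': 'spill', 'b': 0, 'c': 0}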

Second-chance binpacking [Traub et al. 1998].


“Traub et al. have proposed a more complex linear scan algorithm, which they call second-chance binpacking [Traub et al. 1998]. This algorithm is an evolution and refinement of binpacking, a technique used for several years in the DEC GEM optimizing compiler [Blickstein et al. 1992]. At a high level, the binpacking schemes are similar to linear scan, but they invest more time in compilation in an attempt to generate better code. The second-chance binpacking algorithm both makes allocation decisions and rewrites code in one pass. The algorithm allows a variable's lifetime to be split multiple times, so that the variable resides in a register in some parts of the program and in memory in other parts.” [32]

In pseudocode, one pass of the algorithm looks as follows:

for each instruction, in linear order:
    for each temporary t referenced by the instruction:
        if t is currently in a register r:
            rewrite the reference to use r
        else if there exists a register r with a large enough lifetime hole:
            assign t to r
        else:
            spill the lowest-cost candidate
for each edge in the control-flow graph:
    resolve conflicting location assumptions

Trace allocation


“A global approach such as graph coloring [Chaitin et al., 1981; Briggs et al., 1989; George and Appel, 1996] does not provide this flexibility. Optimizations focus here on the heuristics to improve code quality. For just-in-time compilation these approaches are often too costly.” [33]


“Trace register allocation [is] a non-global alternative. The idea is to divide a control-flow graph into linear code segments, so-called traces, and to solve the register allocation problem for those independently. Later, the intermediate results are merged to build a solution for the whole compilation unit.” [34]

“Instead of processing a whole method at once, the basic blocks of the control flow graph are partitioned into traces, i.e., linear sub-graphs of sequentially executed blocks. For each trace, register allocation is performed without interaction with other parts of the compilation unit. This simplifies the problem of register allocation since control flow can be ignored.” [35]
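As an illustration of trace formation only (the cited allocators use profiling information and more careful heuristics), traces can be built by greedily following successors:

  # Toy sketch of trace formation: follow each block's preferred successor
  # until a block already belongs to a trace. Data shapes are assumptions.

  def build_traces(blocks, successors):
      assigned = set()
      traces = []
      for b in blocks:                  # blocks assumed in reverse post-order
          if b in assigned:
              continue
          trace = []
          while b is not None and b not in assigned:
              trace.append(b)
              assigned.add(b)
              nexts = [s for s in successors.get(b, []) if s not in assigned]
              b = nexts[0] if nexts else None   # pick the preferred successor
          traces.append(trace)
      return traces

  cfg = {"entry": ["loop"], "loop": ["loop", "exit"], "exit": []}
  print(build_traces(["entry", "loop", "exit"], cfg))  # [['entry', 'loop', 'exit']]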

Spilling


“With the exception of short-lived temporaries, most temporaries must spill – including long lived temporaries that are used within inner loops. Live-range splitting before or during register allocation helps to alleviate the problem but prior techniques are sometimes complex, make no guarantees about subsequent colorability and thus require further iterations of splitting, pay no attention to addressing modes, and make no claim to optimality.”[36]

Rematerialization


Because the problem of optimal register allocation is NP-complete, compilers employ heuristic techniques to approximate its solution.

Chaitin et al. discuss several ideas for improving the quality of spill code. They point out that certain values can be recomputed in a single instruction, and that the required operands will always be available for the computation. They call these exceptional values never-killed and note that such values should be recalculated instead of being spilled and reloaded. They further note that an uncoalesced copy of a never-killed value can be eliminated by recomputing it directly into the desired register. [37]

These techniques are termed rematerialization. In practice, opportunities for rematerialization include:

  • immediate loads of integer constants and, on some machines, floating-point constants,
  • computing a constant offset from the frame pointer or the static data area, and
  • loading non-local frame pointers from a display.[37]

Briggs et al. extend Chaitin's work to take advantage of rematerialization opportunities for complex, multi-valued live ranges. They found that this works if each value is tagged with enough information to allow the allocator to handle it correctly. Briggs's approach is the following: first, split each live range into its component values, then propagate rematerialization tags to each value, and finally form new live ranges from connected values having identical tags. [37]
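A toy sketch of the never-killed idea: when a value's defining instruction can be re-executed from always-available operands, recompute it instead of spilling. The opcode names are invented for this example.

  # Prefer rematerialization over spilling for "never-killed" values such as
  # constants, whose defining instruction can always be re-executed.

  def spill_or_remat(var, defining_instr):
      op, args = defining_instr
      # Constants and frame-pointer offsets can be recomputed in one
      # instruction, so reload code is unnecessary for them.
      if op in ("const", "frame_offset"):
          return ("rematerialize", defining_instr)
      return ("spill", var)             # otherwise fall back to store/load

  print(spill_or_remat("t1", ("const", [42])))      # ('rematerialize', ('const', [42]))
  print(spill_or_remat("t2", ("add", ["a", "b"])))  # ('spill', 't2')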

Mixed approaches


Hybrid allocation (mixing linear scan and graph coloring)


“Cavazos et al. [2006] proposed a hybrid optimization mechanism to switch between a graph coloring and a linear scan allocator in the Jikes RVM. They use an offline machine learning algorithm to find a decision heuristic. The induced heuristic reduces the total time (compile time plus benchmark execution time) by 9% on average over graph coloring for a selected set of benchmarks from the SPECjvm98 suite. To classify a method, they use properties which are similar to those we are using. However, we can change the allocation algorithm for each trace even within a method. This allows more fine-grained control over the compile-time vs. peak-performance trade-off.” [38]

“Hybrid optimizations choose dynamically at compile time which optimization algorithm to apply from a set of different algorithms that implement the same optimization. They use a heuristic to predict the most appropriate algorithm for each piece of code being optimized. Specifically, we construct a hybrid register allocator that chooses between linear scan and graph coloring register allocation. Linear scan is more efficient, but sometimes less effective; graph coloring is generally more expensive, but sometimes more effective”[39]

“A hybrid optimization uses a heuristic to choose the best of these algorithms to apply in a given situation. Here we construct a hybrid register allocator that chooses between two different register allocation algorithms, graph coloring and linear scan. The goal is to create an allocator that achieves a good balance between two factors: trying to find a good packing of the variables to registers (and thereby achieving good running time performance) and trying to reduce the overhead of the allocator.”[40]
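As a hedged sketch only (Cavazos et al. learn their heuristic offline from method features; the thresholds below are invented for illustration), a hybrid allocator's dispatch might look like:

  # Invented decision heuristic: graph coloring pays off when register
  # pressure is high or the code is hot; linear scan is cheaper elsewhere.

  def choose_allocator(num_live_ranges, in_hot_loop):
      if in_hot_loop or num_live_ranges > 100:
          return "graph_coloring"
      return "linear_scan"

  print(choose_allocator(12, False))   # linear_scan
  print(choose_allocator(250, True))   # graph_coloring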

Split allocation (static/offline then dynamic/online)


“A split register allocator can be very aggressive in its offline stage, producing a semantic summary through bytecode annotations that can be processed by a lightweight online stage. The challenges are fourfold: (sub-)linear-size annotation, linear-time online processing, minimal loss of code quality, and portability of the annotation.” [41]

“Split compilation has the potential to combine the advantages of offline and online compilation: running expensive analyses offline to prune the optimization space, deferring a more educated optimization decision to the online stage, when the precise execution context is known.” [42]

“The online stage performs allocation based on a compact spill set collected by the offline stage, and carried as bytecode annotations” [43]

“The split compilation term was first coined in the context of JIT vectorization [12]. Split register allocation improves on Jones and Kamin's annotation-driven approach by leveraging the decoupled allocation (spilling) and assignment (coloring) phases of register allocation.” [44]

For deferred compilation to be effective, high-level information must be propagated while lowering the program representation: it can take the form of annotations in Java class files for register allocation [2, 18, 27], array bound checks removal [38] or side effect analysis [29, 32]. Split compilation generalizes these approaches: it uses annotations and coding conventions in the intermediate language to coordinate the optimization process over the entire lifetime of the program. [45]

Split compilation can also enhance classical optimizations, to speed them up [29] or improve their effectiveness [18]. Diouf et al. [18] revisit register allocation, splitting the optimization into coordinated allocation and assignment heuristics. This split leverages fundamental advances in register allocation [7]. Compact, portable annotations drive a linear-time online algorithm, generating code of comparable quality with an optimal offline allocation, and saving up to 40% of the spills on standard Java benchmarks. [46]

Recent results on the SSA form open promising directions for the design of new register allocation heuristics for embedded systems and especially for embedded compilation. In particular, heuristics based on tree scan with two separated phases — one for spilling, then one for coloring/coalescing — seem good candidates for designing memory-friendly, fast, and competitive register allocators. [47]

Combinatorial approaches


Integer Linear Programming


“This paper presents a family of new register allocation algorithms that are suitable for off-line computation of high-quality register allocations. The algorithm is targeted for embedded systems which need to run medium-sized applications with limited resources.” [48]

“The approach presented in this paper makes a radical departure from the graph coloring model, completely eliminating the boolean decision of spilling or not spilling a temporary. The basic idea is to allow temporaries to switch registers at any time and to use constraints to force temporaries that are used at a given instruction into appropriate registers only at the time of use”[49]

“The proposed approach uses Integer Linear Programming (ILP) [13]. Using minor variations to the integer linear program the model is able to encompass features from a large body of previous work on register allocation, including bit-wise allocation, coalescing, spilling, use of registers for both spilling and ordinary temporaries, and a limited form of rematerialization [3].“[50]
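For illustration, here is a minimal ILP model in this spirit (the cited formulation is considerably richer, covering bit-wise allocation, coalescing and rematerialization). Let $x_{v,r} = 1$ when live range $v$ is held in register $r$, $s_v = 1$ when $v$ is spilled, $c_v$ its spill cost, and $R$ the number of registers:

\[
\begin{aligned}
\min \sum_{v} c_v \, s_v \quad \text{s.t.} \quad
& s_v + \sum_{r=1}^{R} x_{v,r} = 1 && \text{for each live range } v,\\
& x_{u,r} + x_{v,r} \le 1 && \text{for each interfering pair } (u,v) \text{ and each register } r,\\
& x_{v,r},\ s_v \in \{0,1\}.
\end{aligned}
\]

The first constraint forces every live range to be either spilled or assigned exactly one register; the second forbids interfering live ranges from sharing a register; the objective minimizes total spill cost.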

Partitioned Boolean Quadratic Programming


Multi-Commodity Flow (local)


The Multi-Commodity Flow (MCF) approach was introduced by Koes and Goldstein, who propose to view register allocation restricted to local basic blocks only.

In the MCF approach, a program is modeled as a collection of physical locations, registers or memory, called pipes, through which the allocator must pass a number of indivisible variables, called commodities. A flow of a commodity represents the detailed allocation of the variable that the commodity encodes. Koes and Goldstein optimized their progressive allocator to reduce the size of the target programs, and showed that this method consistently produces code of smaller size than a graph-coloring-based allocator.

Comparison between the different techniques


Comparing the results of the different techniques requires common ground: shared benchmark suites, a common metric (compile time, spill count, or run-time performance of the generated code), and a common execution environment such as a given virtual machine.

Bibliography

  • Josef Eisl, Matthias Grimmer, Doug Simon, Thomas Würthinger and Hanspeter Mössenböck, « Trace-based Register Allocation in a JIT Compiler », in Proceedings of the 13th International Conference on Principles and Practices of Programming on the Java Platform: Virtual Machines, Languages, and Tools (PPPJ '16), Lugano, Switzerland, ACM, 2016, p. 14:1-14:11 (ISBN 978-1-4503-4135-6, DOI 10.1145/2972206.2972211)
  • Gregory J. Chaitin, Marc A. Auslander, Ashok K. Chandra, John Cocke, Martin E. Hopkins and Peter W. Markstein, « Register Allocation via Coloring », Computer Languages, vol. 6, Pergamon Press, 1981, p. 47-57 (ISSN 0096-0551, DOI 10.1016/0096-0551(81)90048-5)
  • Johan Runeson and Sven-Olof Nyström, « Retargetable Graph-Coloring Register Allocation for Irregular Architectures », Springer Berlin Heidelberg, 2003, p. 240-254 (ISBN 978-3-540-39920-9, DOI 10.1007/978-3-540-39920-9_1)
  • Alfred V. Aho, Monica S. Lam, Ravi Sethi and Jeffrey D. Ullman, Compilers: Principles, Techniques, and Tools (2nd Edition), Addison-Wesley Longman Publishing Co., Inc. (ISBN 0321486811)
  • Ronald V. Book, review of Richard M. Karp, « Reducibility among combinatorial problems » (in Complexity of Computer Computations, Plenum Press, New York and London, 1972, p. 85-103), The Journal of Symbolic Logic, vol. 40, no. 4, p. 618-619 (ISSN 0022-4812, DOI 10.2307/2271828)
  • Quentin Colombet, Florian Brandner and Alain Darte, « Studying Optimal Spilling in the Light of SSA », in Proceedings of the 14th International Conference on Compilers, Architectures and Synthesis for Embedded Systems (CASES '11), Taipei, Taiwan, ACM, 2011, p. 25-34 (ISBN 978-1-4503-0713-0, DOI 10.1145/2038698.2038706)
  • David Ryan Koes and Seth Copen Goldstein, « Register Allocation Deconstructed », in Proceedings of the 12th International Workshop on Software and Compilers for Embedded Systems (SCOPES '09), Nice, France, ACM, 2009, p. 21-30 (ISBN 978-1-60558-696-0)

References

  1. Aho 2016, p. 30
  2. Runeson 2003, p. 242
  3. Eisl 2016, p. 14:1
  4. Koes 2009, p. 21
  5. Koes 2009, p. 21
  6. Colombet 2011, p. 26
  7. F. Bouchez et al., 2007, p.1.
  8. F. Bouchez, 2007, p.1.
  9. J. Runeson, S-O Nyström, 2003, p.2.
  10. F. Bouchez, 2007, p.8.
  11. J. Eisl et al., 2017, p.1.
  12. M. Farach and V. Liberatore, 1998, p.2.
  13. M. Farach and V. Liberatore, 1998, p.3.
  14. M. Farach and V. Liberatore, 1998, p.4.
  15. L. Horwitz et al., 1966, p.1.
  16. Chaitin 1981, p. 47
  17. M. Poletto, V. Sarkar, 1999, p.2.
  18. J. Runeson, S-O Nyström, 2003, p.2.
  19. Book 1972, p. 618-619
  20. Q. Colombet et al., 2011, p.1.
  21. J. Cavazos et al., 2006, p.1.
  22. J. Runeson, S-O Nyström, 2003, p.1.
  23. Lal George and Andrew W. Appel, « Iterated Register Coalescing », ACM Trans. Program. Lang. Syst., vol. 18, no. 3, p. 300-324 (ISSN 0164-0925, DOI 10.1145/229542.229546)
  24. Preston Briggs, Keith D. Cooper and Linda Torczon, « Improvements to Graph Coloring Register Allocation », ACM Trans. Program. Lang. Syst., vol. 16, no. 3, p. 428-455 (ISSN 0164-0925, DOI 10.1145/177492.177575)
  25. Lal George and Andrew W. Appel, « Iterated Register Coalescing », ACM Trans. Program. Lang. Syst., vol. 18, no. 3, p. 300-324 (ISSN 0164-0925, DOI 10.1145/229542.229546)
  26. Eisl 2016, p. 1
  27. J. Eisl et al., 2016, p.1.
  28. J. Eisl et al., 2016, p.2.
  29. M. Poletto, V. Sarkar, 1999, p.1.
  30. M. Poletto, V. Sarkar, 1999, p.4.
  31. J. Cavazos et al., 2006, p.1-2.
  32. M. Poletto, V. Sarkar, 1999, p.3.
  33. J. Eisl et al., 2018, p.11.
  34. J. Eisl et al., 2018, p.1.
  35. J. Eisl et al., 2017, p.1.
  36. A. W. Appel, L. George, 2000, p.1.
  37. a b c Briggs et al., 1992, p. 313.
  38. J. Eisl et al., 2017, p.11.
  39. J. Cavazos et al., 2006, p.1.
  40. J. Cavazos et al., 2006, p.2/125.
  41. B. Diouf, et al., 2010, p.1.
  42. B. Diouf, et al., 2010, p.2.
  43. B. Diouf, et al., 2010, p.7.
  44. B. Diouf, et al., 2010, p.13.
  45. A. Cohen and E. Rohou, 2010, p.4.
  46. A. Cohen and E. Rohou, 2010, p.5.
  47. F. Bouchez et al., 2007, p.1.
  48. R. Barik et al., 2006, p.1.
  49. R. Barik et al., 2006, p.1.
  50. R. Barik et al., 2006, p.2.