Publication number: CA2660910 A1
Publication type: Application
Application number: CA 2660910
PCT number: PCT/US2007/075814
Publication date: 21 Feb 2008
Filing date: 13 Aug 2007
Priority date: 15 Aug 2006
Also published as: CA2660910C, EP2052501A2, US7529236, US7835349, US8391176, US20070049267, US20090176477, US20110028122, WO2008022076A2, WO2008022076A3, WO2008022076A9
Inventors: Sekhar Kota, Todd Crick, Michael Dunn, Mario Proietti, Khaled Dessouky, Daniel A. Lambert, Ronald L. Poulin
Applicant: Technocom Corporation, Sekhar Kota, Todd Crick, Michael Dunn, Mario Proietti, Khaled Dessouky, Daniel A. Lambert, Ronald L. Poulin, Incode Telecom Group, Inc.
Embedded wireless location validation benchmarking systems and methods
CA 2660910 A1
Abstract
Systems, methods and software are described for benchmarking the location determination capabilities of a wireless communications network (110). A mobile communications device (115) is configured to receive data identifying a reference location for the device (105). A communication network, communicatively coupled with the mobile communications device (115), calculates a computed location for the device using an alternative location determination technique. The reference location and computed location may be determined for any number of additional devices (105), as well. The accuracy and reliability of a system may then be assessed by comparing one or more computed locations with associated reference locations. The latency attributable to the calculation of one or more computed locations may also be determined.
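The assessment described above reduces to measuring the distance between each computed fix and its associated reference fix. A minimal Python sketch using the haversine great-circle distance; the coordinates and the `haversine_m` helper are illustrative, not part of the disclosure:

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two WGS-84 points."""
    r = 6371000.0  # mean Earth radius, meters
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

# Reference (e.g., GPS) fix vs. network-computed fix for the same event:
reference = (34.0522, -118.2437)   # illustrative coordinates
computed = (34.0530, -118.2420)
error_m = haversine_m(*reference, *computed)  # location error in meters
```

Aggregating `error_m` over many such pairs yields the accuracy and reliability statistics the abstract refers to.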
Claims (36)
1. A telecommunications system for benchmarking location determination capabilities for a mobile communications device in a communications network, the system comprising:
the mobile communications device configured to:
receive a first set of data identifying location of the device, the location identified with a first technique;
generate a second set of data formatted to trigger the communications network to identify the location of the device using a second technique; and transmit one or more communications signals comprising the first set of data and the second set of data;
the communications network communicatively coupled with the mobile communications device, and configured to:
receive the second set of data through at least a subset of the one or more communications signals;
calculate location of the device using the second technique; and generate a third set of data representative of the location identified using the second technique; and a location processing server communicatively coupled with the communications network, and configured to:
store at least a subset of the first set of data and at least a subset of the third set of data in a data store associated with the location processing server.
2. The system of claim 1, wherein the mobile communications device is further configured to:
identify a variable interval to be used between a plurality of transmissions of the second set of data, each interval corresponding to the earlier of:
a time interval metric; and a distance movement metric.
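The "earlier of" trigger in claim 2 can be sketched as a simple predicate: transmit as soon as either metric is met. The threshold values and function name below are illustrative assumptions, not taken from the claims:

```python
def should_transmit(elapsed_s, moved_m, max_interval_s=30.0, max_move_m=100.0):
    """Fire on the earlier of the two metrics: enough time has elapsed
    OR the device has moved far enough since the last transmission.
    Thresholds are illustrative, not from the claims."""
    return elapsed_s >= max_interval_s or moved_m >= max_move_m
```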
3. The system of claim 1, wherein the mobile communications device is further configured to:

analyze the first set of data to determine whether the first set of data comprises location information above a threshold accuracy; and transmit the second set of data based at least in part on the determination that the first set of data comprises location information above the threshold accuracy.
4. The system of claim 1, wherein the second technique comprises a time difference of arrival technique.
5. The system of claim 1, wherein: the second technique comprises an assisted global positioning system technique; and the communications system further comprises an assistance server configured to produce calculations on global positioning system satellite data and to transmit at least a subset of the calculations to the mobile communications device.
6. The system of claim 1, wherein the first set of data and the third set of data represent a location of the device at a substantially same time.
7. The system of claim 6, wherein the location processing server is configured to associate the first set of data and the third set of data to identify accuracy of the calculated location.
8. The system of claim 1, wherein: the mobile communications device is configured to associate a first time with the transmission of the second set of data;
the communications network is configured to associate a second time with the generation of the third set of data; and the location processing server is configured to associate the first time with the second time to identify a period of latency attributable to the calculated location.
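The latency measurement of claim 8 amounts to differencing the two associated timestamps. A hedged sketch; the ISO-8601 string format and function name are assumptions for illustration:

```python
from datetime import datetime

def location_latency_s(t_trigger_iso, t_computed_iso):
    """Latency attributable to the calculated location: time between
    the device's trigger (first time) and the network's generation
    of the computed fix (second time)."""
    t1 = datetime.fromisoformat(t_trigger_iso)
    t2 = datetime.fromisoformat(t_computed_iso)
    return (t2 - t1).total_seconds()

lat = location_latency_s("2007-08-13T12:00:00+00:00", "2007-08-13T12:00:07+00:00")
```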
9. The system of claim 1, wherein: the mobile communications device is configured to transmit at least a subset of the one or more communications signals comprising the second set of data by making a test call; and the test call is made to a test telephone number configured to trigger the communications network to calculate location of the device using the second technique and generate a third set of data, without notifying a public safety answering point.
10. The system of claim 1, wherein: the mobile communications device is configured to transmit at least a subset of the one or more communications signals comprising the second set of data by making a test call;
the test call is made to an emergency telephone number; and the communications network is configured to identify the test call as a test, and fail to notify the public safety answering point about the test call based at least in part on the identification as a test.
11. The system of claim 1, wherein: the mobile communications device is configured to transmit at least a subset of the one or more communications signals comprising the second set of data by generating a location trigger message; and the location trigger message is configured to trigger the communications network to calculate location of the device using the second technique and generate a third set of data, without notifying a public safety answering point.
12. The system of claim 1, wherein the mobile communications device is configured to generate and transmit the data during regular and customary use of the device by a user who regularly and customarily uses the device primarily for purposes of voice or data communication.
13. The system of claim 1, wherein the communications network comprises the location processing server.
14. The system of claim 1, wherein the first set of data comprises global positioning system coordinates.
15. A method of benchmarking location determination capabilities with a mobile communications device, the method comprising:
receiving a first set of data identifying location of the device, the location identified with a first technique;

generating a second set of data associated with the first set, the second set formatted to trigger a communications network to calculate location of the device using a second technique; and transmitting one or more communications signals comprising the first set of data and the second set of data.
16. The method of claim 15, further comprising:
identifying a variable interval between a plurality of transmissions of at least a subset of the second set of data, the interval corresponding to a time interval.
17. The method of claim 15, further comprising:
identifying a variable interval between a plurality of transmissions of at least a subset of the second set of data, the interval corresponding to a distance movement metric identified using the first set of data.
18. The method of claim 15, further comprising:
identifying a variable interval between a plurality of transmissions of at least a subset of the second set of data, the interval corresponding to a location metric identified by referencing a location identified using the first set of data.
19. The method of claim 15, further comprising:
analyzing the first set of data to determine whether the first set of data comprises location information above a threshold accuracy, wherein the transmitting the second set of data is in response to the determination that the first set of data comprises location information above the threshold accuracy.
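The threshold gate of claims 3 and 19 might look like the sketch below. The `est_accuracy_m` field name and the 50 m default are illustrative assumptions, and "above a threshold accuracy" is read here as an error estimate at or below the threshold:

```python
def transmit_if_accurate(fix, threshold_m=50.0):
    """Only trigger the network benchmark when the reference fix is
    accurate enough to serve as ground truth. `fix` is an
    illustrative dict carrying an estimated-accuracy field."""
    if fix.get("est_accuracy_m", float("inf")) <= threshold_m:
        return {"trigger": True, "reference": fix}
    return {"trigger": False, "reference": None}
```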
20. The method of claim 15, further comprising:
associating an identifier with the first set of data and the second set of data, wherein the identifier comprises a selection from the group consisting of a telephone number, a timestamp, a dialed telephone number, a cell identification number, and any combination thereof.
21. The method of claim 15, wherein the transmitting the second set of data comprises:

initiating a test call to a test telephone number configured to trigger the communications network to calculate location of the device using the second technique and generate a third set of data without notifying the public safety answering point.
22. A computer program embodied on at least one computer readable medium, the computer program comprising instructions executable by a mobile communications device to:
receive a first set of data identifying the location of the device, the location identified with a first technique;
generate a second set of data associated with the first set, the second set formatted to trigger a communications network to calculate location of the device using a second technique; and transmit one or more communications signals comprising the first set of data and the second set of data.
23. A method of benchmarking location determination capabilities for a mobile communications device in a communications network, the method comprising:
receiving, from the device, a first set of data identifying a reference location of the device, the reference location identified with a first technique;
receiving, from a communications network, a second set of data identifying a computed location of the device, the computed location calculated by the communications network using a second technique; and associating the first set of data with the second set of data.
24. The method of claim 23, wherein the first set of data and the second set of data represent a location of the device at a substantially same time.
25. The method of claim 23, further comprising:
determining an accuracy of the computed location in relation to the reference location by comparing the first set of data and the second set of data.
26. The method of claim 23, further comprising:
determining a latency for the computed location calculation by comparing a timestamp of the first set of data and a timestamp of the second set of data.
27. The method of claim 23, further comprising:

receiving a test call from the mobile communications device; and triggering, based at least in part on the received test call, the communications network to generate the second set of data.
28. The method of claim 27, further comprising:
identifying the received test call as a test; and failing to notify the public safety answering point about the test call based at least in part on the identifying step.
29. A method for benchmarking location determination capabilities for a plurality of mobile communications devices in a communications network utilizing a data store, the method comprising:
receiving, from each device of the plurality, a first set of data identifying a reference location of each device, the reference location identified with a first technique;
receiving, from a communications network, a second set of data for at least a subset of the plurality identifying a computed location, the computed location calculated by the communications network using a second technique;
associating each received second set of data with a selected first set of data, wherein each pair of associated sets of data identifies location of the same device at a substantially same time; and storing the associated sets of data.
30. The method of claim 29, further comprising:
determining an accuracy of the second technique by comparing the locations identified by the associated sets of data.
31. The method of claim 30, further comprising:
providing a summarized report of the determined accuracy of the second technique for a subset of the associated sets of data.
32. The method of claim 31, wherein providing the summarized report comprises:
providing a summarized report on an interface that is accessed from and distributed to a remote location.
33. The method of claim 31, wherein providing the summarized report comprises:

transmitting image data comprising a map illustrating an accuracy metric associated with at least a subset of the computed locations.
34. The method of claim 30, further comprising:
weighting a first location differently than a second location in determining the accuracy of the second technique.
35. The method of claim 29, further comprising:
determining a latency of the second technique by comparing one or more timestamps for the associated sets of data.
36. The method of claim 29, further comprising:
determining a yield of the second technique by identifying unselected first sets of data.
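Claims 30, 35, and 36 describe accuracy, latency, and yield statistics over the associated record pairs. A toy summary along those lines; the record field names and the yield formula (matched pairs over matched plus unselected reference fixes) are illustrative assumptions:

```python
def summarize(associated, unselected_count):
    """Accuracy, latency, and yield metrics over associated
    (reference, computed) record pairs. Field names are
    illustrative, not from the claims."""
    errors = [rec["error_m"] for rec in associated]
    latencies = [rec["latency_s"] for rec in associated]
    n = len(associated)
    return {
        "mean_error_m": sum(errors) / n,
        "mean_latency_s": sum(latencies) / n,
        # Yield: fraction of reference fixes the network matched
        # with a computed location.
        "yield": n / (n + unselected_count),
    }
```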
Description  (OCR text may contain errors)

[0001] Priority benefit claims for this application are made in the accompanying Application Data Sheet, Request, or Transmittal (as appropriate, if any). To the extent permitted by the type of the instant application, this application incorporates by reference for all purposes the following applications, all owned by the owner of the instant application:

U.S. Provisional Application Serial No. 60/804,173 (Docket No. LS.2006.09), filed June 8, 2006, first named inventor Geoffrey Furnish, and entitled MORPHING FOR GLOBAL PLACEMENT USING INTEGER LINEAR PROGRAMMING; and

PCT Application Serial No. PCT/US2006/025294 (Docket No. LS.2006.01B), filed June 28, 2006, first named inventor Geoffrey Furnish, and entitled METHODS AND SYSTEMS FOR PLACEMENT.

[0002] Field: Advancements in integrated circuit design, including placement and routing of elements in a Computer Aided Design (CAD) context, are needed to provide improvements in performance, efficiency, and utility of use.

[0003] Related Art: Unless expressly identified as being publicly or well known, mention herein of techniques and concepts, including for context, definitions, or comparison purposes, should not be construed as an admission that such techniques and concepts are previously publicly known or otherwise part of the prior art. All references cited herein (if any), including patents, patent applications, and publications, are hereby incorporated by reference in their entireties, whether specifically incorporated or not, for all purposes.

[0004] The invention may be implemented in numerous ways, including as a process, an article of manufacture, an apparatus, a system, a composition of matter, and a computer readable medium such as a computer readable storage medium or a computer network wherein program instructions are sent over optical or electronic communication links. In this specification, these implementations, or any other form that the invention may take, may be referred to as techniques. The Detailed Description provides an exposition of one or more embodiments of the invention that enable improvements in performance, efficiency, and utility of use in the field identified above. The Detailed Description includes an Introduction to facilitate the more rapid understanding of the remainder of the Detailed Description. The Introduction includes Example Embodiments of one or more of systems, methods, articles of manufacture, and computer readable media in accordance with the concepts described herein. As is discussed in more detail in the Conclusions, the invention encompasses all possible modifications and variations within the scope of the issued claims.

Brief Description of Drawings

[0005] Fig. 1 is a flow diagram illustrating selected details of an embodiment of placing, routing, analyzing, and generating fabrication data for any portion of an integrated circuit according to a Simultaneous Dynamical Integration (SDI)-based flow.

[0006] Fig. 2 is a flow diagram illustrating selected details of an embodiment of placing and routing any portion of an integrated circuit according to an SDI-based flow.

[0007] Fig. 3A is a flow diagram illustrating selected details of an embodiment of global placement according to SDI-based modeling and simulation.

[0008] Fig. 3B is a flow diagram illustrating selected details of an embodiment of initial placement operations for global placement.

[0009] Fig. 3C is a flow diagram illustrating selected details of an embodiment of density field based force component computation.

[0010] Fig. 3D is a flow diagram illustrating selected details of an embodiment of gate density accumulation.

[0011] Fig. 3E is a conceptual diagram illustrating an embodiment of two-point interpolation of node mass to grid points.

[0012] Fig. 3F is a conceptual diagram illustrating an embodiment of three-point interpolation of node mass to grid points.

[0013] Fig. 3G is a conceptual diagram illustrating an embodiment of applying boundary grid point masses to interior grid points.

[0014] Fig. 3H is a flow diagram illustrating selected details of an embodiment of digital density filtering.

[0015] Fig. 3I is a flow diagram illustrating selected details of an embodiment of interpolating gate fields to nodes.

[0016] Fig. 4 is a flow diagram illustrating selected details of an embodiment of SDI-based modeling and simulation.

[0017] Fig. 5A is a flow diagram illustrating selected details of a first embodiment of resource reconciliation, as a first example of legalization.

[0018] Fig. 5B is a flow diagram illustrating selected details of a second embodiment of resource reconciliation, as a second example of legalization.

[0019] Fig. 5C is a flow diagram illustrating selected details of an embodiment of partitioning.

[0020] Fig. 6 is a flow diagram illustrating selected details of an embodiment of detailed placement (also referred to as detail placement elsewhere herein).

[0021] Fig. 7A is a flow diagram illustrating selected aspects of an embodiment of delay path reduction and minimization, as an example of timing closure.

[0022] Fig. 7B illustrates a conceptual view of selected elements of an embodiment of timing-driven forces.

[0023] Fig. 7C illustrates a spatial organization of the driver and the coupled loads of Fig. 7B.

[0024] Fig. 7D illustrates an embodiment of Net Boundary Box (NBB) estimation of routing to cover the driver and the loads of Fig. 7C.

[0025] Fig. 7E illustrates an embodiment of a rectilinear Steiner Route Tree (SRT) estimation to cover the driver and loads of Fig. 7C.

[0026] Fig. 7F illustrates an embodiment of estimated RC parasitics associated with the SRT of Fig. 7E.

[0027] Figs. 8A and 8B collectively are a flow diagram illustrating selected details of an embodiment of an integrated circuit Electronic Design Automation (EDA) flow using one or more techniques including SDI-directed global placement, legalization, legalization-driven detailed placement, timing optimization, and routing.

[0028] Fig. 9 illustrates selected details of an embodiment of manufacturing integrated circuits, the circuits being designed in part based on SDI-directed design techniques.

[0029] Fig. 10 illustrates selected details of an embodiment of a computer system to execute EDA routines to perform SDI-directed place and route operations.

[0030] Fig. 11 illustrates an embodiment of an SDI-based detailed placement flow.

[0031] Figs. 12A and 12B illustrate concepts relating to an embodiment of netlist elaboration.

[0032] Fig. 13 illustrates an embodiment of detailed placement of a Q-block.

[0033] Fig. 14 illustrates an embodiment of an additional pass of detailed placement of a Q-block.

[0034] Fig. 15A illustrates a form of the form-level net of Fig. 12A. In this view the resource-level nodes are shown internal to the form.

[0035] Fig. 15B illustrates another form that uses different resources to implement the same function as the form of Fig. 15A. In at least one embodiment, the form of Fig. 15B is substituted for the form of Fig. 15A through a morphing process.

[0036] Fig. 15C illustrates a hierarchy of nodes, having hierarchical nodes, form-level nodes, and resource-level nodes.

[0037] Fig. 15D illustrates selected nets connected between selected nodes of Fig. 15C.

[0038] Fig. 15E illustrates the nodes and nets of Fig. 15D after augmentation with resource-level nodes.

[0039] Fig. 16A illustrates the supply and demand for resources R1 through R6 corresponding to target functions of an integrated circuit design having a first selection of forms for the target functions. For at least some of the resources, the demand exceeds the available supply.

[0040] Fig. 16B illustrates the supply and demand for resources R1 through R6 for the same target functions, but using a second selection of forms for the target functions obtained by morphing certain forms to use different resources. For each of the resources shown, the demand is less than or equal to the supply.

[0041] Fig. 17A illustrates an example circuit with a plurality of critical paths.

[0042] Fig. 17B illustrates example computations relating to an embodiment of CPF scoring.

[0043] Fig. 18 illustrates an embodiment of a cascade of buffers of increasing drive strength.

[0044] Fig. 19 illustrates example computations relating to an embodiment of SDF calculation.

[0045] Fig. 20A illustrates an overall procedural control flow in an illustrative relative slack embodiment.

[0046] Fig. 20B illustrates the adjustment of timing driven weight in the relative slack embodiment of Fig. 20A.

[0047] Fig. 21A illustrates a driver in the interior of a net bounding box region.

[0048] Fig. 21B illustrates a driver to one side of a net bounding box region.

[0049] Figs. 22A and 22B illustrate an example circuit excerpt before and after processing according to an embodiment of timing driven buffering and resizing for an array architecture.

[0050] Fig. 23 illustrates a flow diagram of an integrated circuit design flow including an embodiment of processing in accordance with an embodiment of timing driven buffering and resizing for an array architecture.

[0051] Fig. 24A illustrates a top-level view of an embodiment of timing driven buffering and resizing for an array architecture.

[0052] Fig. 24B illustrates a detail view of selected details of an embodiment of timing driven resizing for an array architecture.

[0053] Figs. 25A and 25B illustrate an example route tree as processed by an embodiment of segmenting a portion of the route for timing driven buffering and resizing.

[0054] Fig. 26 illustrates example results of an embodiment of logic replication and tunneling for an array architecture.

[0055] Fig. 27 illustrates a control flow in an illustrative embodiment, as used for density modification.

[0056] Fig. 28 illustrates a control flow of an illustrative embodiment, as used to determine the Steiner-cuts congestion term on the SDI grid.

[0057] Fig. 29 illustrates procedures of an illustrative embodiment, showing creation of a congestion array.

[0058] Fig. 30 illustrates procedures of an illustrative embodiment, showing calculation of a final congestion density enhancement array.

[0059] Fig. 31 illustrates an embodiment of a processing flow for node tunneling out of exclusion zones in an SDI-based integrated circuit design flow.

[0060] Fig. 32 illustrates an embodiment of SDI-related force calculations in a tunneling congestion relief context.

[0061] Fig. 33 illustrates an embodiment of evaluation of tunneling transition criteria.

[0062] Fig. 34A illustrates an example clock tree suitable for input to a Clock Tree Synthesis (CTS) tool for Structured Array Fabric (SAF)-based design flows.

[0063] Fig. 34B illustrates an example clock tree output from the CTS tool operating on the input illustrated in Fig. 34A.

[0064] Fig. 34C illustrates an example clock tree network.

[0065] Fig. 35 illustrates an overview of an embodiment of a CTS flow.

[0066] Fig. 36A illustrates an example die floorplan of a design having embedded Random Access Memory (RAM) or other Intellectual Property (IP) blocks.

[0067] Fig. 36B illustrates a portion of a clock net in a context of a portion of Fig. 36A.

[0068] Fig. 37A illustrates an example of timing driven pin swapping.

[0069] Fig. 37B illustrates an example of effects of clock tree partitioning.

[0070] Fig. 38 illustrates an analysis according to an embodiment of clock domain and sub-domain partitioning.

[0071] A detailed description of one or more embodiments of the invention is provided below along with accompanying figures illustrating selected details of the invention. The invention is described in connection with the embodiments. It is well established that it is neither necessary, practical, nor possible to exhaustively describe every embodiment of the invention. Thus the embodiments herein are understood to be merely exemplary; the invention is expressly not limited to or by any or all of the embodiments herein, and the invention encompasses numerous alternatives, modifications, and equivalents. To avoid monotony in the exposition, a variety of word labels (including but not limited to: first, last, certain, various, further, other, particular, select, some, and notable) may be applied to separate sets of embodiments; as used herein such labels are expressly not meant to convey quality, or any form of preference or prejudice, but merely to conveniently distinguish among the separate sets. The order of some operations of disclosed processes is alterable within the scope of the invention. Wherever multiple embodiments serve to describe variations in process, method, and/or program instruction features, other embodiments are contemplated that, in accordance with a predetermined or a dynamically determined criterion, perform static and/or dynamic selection of one of a plurality of modes of operation corresponding respectively to a plurality of the multiple embodiments. Numerous specific details are set forth in the following description to provide a thorough understanding of the invention. These details are provided for the purpose of example, and the invention may be practiced according to the claims without some or all of these specific details. For the purpose of clarity, technical material that is known in the technical fields related to the invention has not been described in detail so that the invention is not unnecessarily obscured.

[0072] This introduction is included only to facilitate the more rapid understanding of the Detailed Description; the invention is not limited to the concepts presented in the introduction (including explicit examples, if any), as the paragraphs of any introduction are necessarily an abridged view of the entire subject and are not meant to be an exhaustive or restrictive description. For example, the introduction that follows provides overview information limited by space and organization to only certain embodiments. There are many other embodiments, including those to which claims will ultimately be drawn, discussed throughout the balance of the specification.

[0073] As described herein, "dynamic time-evolving SDI" refers to SDI techniques for the modeling and simulation of elements for integrated circuit placement and routing. Dynamic time-evolving SDI includes applying principles of Newtonian mechanics to an "analogy-system" based on a netlist that is a specification of the integrated circuit as part of an EDA flow (such as during physical design development of the integrated circuit). In some usage scenarios the analogy-system (often referred to simply as "system") includes a single point particle corresponding to each device in the netlist. The system further includes a set of one or more forces acting on each of the particles, in certain embodiments computed as a weighted sum. Various numerical integration techniques are used to apply Newton's second law of motion to the system, forming a time-evolving representation of the system in state-space. In other words, a simulation determines paths of the particles in a plane (or three dimensions). Then resultant locations of the point particles are mapped back into resultant placements of the corresponding devices, thus providing SDI-directed placements.
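The particle-integration idea above can be caricatured in a few lines. This is a deliberately simplified sketch: explicit Euler integration, unit masses, two-pin nets, and ad hoc damping are all assumptions for illustration; real SDI force models and integrators are far richer:

```python
def net_forces(pos, nets, k=1.0):
    """Hooke-like attraction between the two pins of each net
    (two-pin nets only, an illustrative simplification)."""
    f = [[0.0, 0.0] for _ in pos]
    for a, b in nets:
        dx = pos[b][0] - pos[a][0]
        dy = pos[b][1] - pos[a][1]
        f[a][0] += k * dx
        f[a][1] += k * dy
        f[b][0] -= k * dx
        f[b][1] -= k * dy
    return [tuple(v) for v in f]

def sdi_step(pos, vel, forces, mass=1.0, damping=0.9, dt=0.1):
    """One explicit-Euler step of Newton's second law for the
    particle 'analogy-system'; damping bleeds off kinetic
    energy so the system settles."""
    new_pos, new_vel = [], []
    for (x, y), (vx, vy), (fx, fy) in zip(pos, vel, forces):
        vx = (vx + fx / mass * dt) * damping
        vy = (vy + fy / mass * dt) * damping
        new_vel.append((vx, vy))
        new_pos.append((x + vx * dt, y + vy * dt))
    return new_pos, new_vel

# Two connected "devices" drift toward each other and settle.
pos = [(0.0, 0.0), (2.0, 0.0)]
vel = [(0.0, 0.0), (0.0, 0.0)]
for _ in range(50):
    pos, vel = sdi_step(pos, vel, net_forces(pos, [(0, 1)]))
```

The final particle positions would then be mapped back to device placements.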
[0074] Using dynamic time-evolving SDI, elements of the system are pushed simultaneously forward in time through a smooth integration in which the model for the system dynamics is an abstraction utilizing continuous variables and simultaneous exploration. Departures from idealizations of continuous variables and simultaneity are artifacts of techniques for solving the system of coupled simultaneous governing equations, such as those that occur with numerical integration on a digital computer. In such digital computer implementations, the departures are limited to specifiable tolerances determined by the quality of result goals and economic considerations (such as available solution time, supply of computing power available, and other similar constraints).
[0075] The system forces include attractive and spreading components, used to model effects of interconnect and resource usage (such as device area), and to drive various optimizations (such as timing closure). Some of the system forces are directly expressed as functions of the positions of other devices (such as attractive forces between connected devices), some of the forces are indirect functions of the positions of other devices and are computed by way of various fields (such as one or more density fields), and some of the forces that act on some of the devices are independent of the positions of the other devices in the system. Computing selected forces as fields in certain embodiments affords more computational efficiency.
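One way forces can be computed "by way of fields" is to first accumulate device mass onto a grid and then derive spreading forces from the resulting density. The accumulation step might be sketched as follows; the grid shape, unit masses, and clamping are illustrative choices, not the patented method:

```python
def density_grid(pos, nx, ny, w, h):
    """Accumulate unit node mass onto an ny-by-nx grid by bilinear
    (two-point per axis) interpolation -- a toy stand-in for the
    density-field accumulation described in the text."""
    g = [[0.0] * nx for _ in range(ny)]
    for x, y in pos:
        # Map chip coordinates to fractional grid coordinates,
        # clamped so the upper neighbor (i + 1, j + 1) stays on grid.
        gx = min(max(x / w * (nx - 1), 0.0), nx - 1 - 1e-9)
        gy = min(max(y / h * (ny - 1), 0.0), ny - 1 - 1e-9)
        i, j = int(gx), int(gy)
        fx, fy = gx - i, gy - j
        g[j][i] += (1 - fx) * (1 - fy)
        g[j][i + 1] += fx * (1 - fy)
        g[j + 1][i] += (1 - fx) * fy
        g[j + 1][i + 1] += fx * fy
    return g
```

Total mass is conserved by construction, which is what makes a density field a usable proxy for local resource demand.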

[0076] SDI-directed placement is useful in various integrated circuit design flows and related implementation architectures, including full custom, semi-custom, standard cell, structured array, and gate array design flows and related implementation architectures. Several variations in the context of structured array design flows enable efficient processing of numerous constraints imposed by the partially predetermined nature of the arrays. A library of composite cells or "morphable-devices" is provided to a synthesis tool (such as Synopsys Design Compiler or any other similar tool). The morphable-devices are used as target logic elements by the synthesis tool to process a netlist (either behavioral or gate-level) provided by a user. A synthesis result is provided as a gate-level netlist (such as a Verilog gate-level netlist) expressed as interconnections of morphable-devices. The synthesis tool assumes the morphable-devices represent the final implementation, subject to device sizing to resolve circuit timing issues.
[0077] The morphable-devices are, however, subject to additional modifications in the structured array design flow context (see "Structured Arrays", elsewhere herein), as each morphable-device may be implemented in a plurality of manners using varying resources of the structured array. During phases of resource reconciliation (where attempts are made to satisfy required resources with locally available resources), one or more of the morphable-devices may be transformed to a logically equivalent implementation. For example, an AND function may be implemented by an AND gate, by a NAND gate and an inverter, or by any other equivalent formulation. Functionally equivalent alternatives are grouped according to implementation function, and individual realizations within a given function are referred to as "forms". Thus any morphable-device may be implemented as any instance of any form having an equivalent function. Subsequent operations account for variation between logically equivalent forms (such as differences in area, timing behavior, routing resources used or provided, and any other characteristic distinguishing one form from another). Operations relating to interchanging implementations of morphable-devices to satisfy structured array resource limitations and underlying topology, as well as meeting spatial organization constraints, are termed "morphing".
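The grouping of forms by implementation function described in [0077] can be sketched minimally as follows. The FORM_LIBRARY data, the function and form names, and the resource vectors are invented for illustration; the AND example mirrors the one given in the text.

```python
# Illustrative sketch (assumed data model, not the specification's): each
# function maps to a group of logically equivalent "forms", and morphing
# exchanges a node's form for another member of its equivalency class.

FORM_LIBRARY = {
    # function -> {form name: resources consumed by that realization}
    "AND": {
        "and_gate":      {"and": 1},
        "nand_plus_inv": {"nand": 1, "inv": 1},
    },
}

def equivalent_forms(function):
    """All realizations ("forms") grouped under one implementation function."""
    return sorted(FORM_LIBRARY[function])

def morph(node, new_form):
    """Swap a node's implementation for a member of its equivalency class."""
    if new_form not in FORM_LIBRARY[node["function"]]:
        raise ValueError("not logically equivalent")
    return {**node, "form": new_form}
```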

[0078] The SDI-directed placement, in various contexts including structured array design flows, includes several phases: global placement, legalization, and detailed placement. Global placement in certain embodiments provides a first-cut location for each morphable-device in a netlist. The first-cut location is subject to additional refinement by subsequent processing (including legalization and detailed placement). Global placement is considered complete when a configuration is attained that is determined to be sufficiently close to legality to proceed to legalization, i.e. the configuration is likely to be reducible to a satisfactory implementation. Legalization starts with the global placement configuration and produces a final configuration in which demand for resources in every region is determined to be no greater than corresponding supply in each region. Detailed placement starts with the legalized placement configuration and assigns every element implementing a morphable-device to specific resources in an implementation (such as a set of specific resource-slots in a structured array architecture). Some simple functions may have degenerate forms requiring only a single resource instance, but more complex forms are composite, requiring more than one physical resource instance plus internal interconnect to correctly implement the function.

[0079] Various morphing and similar transformation operations may be used in any combination of phases including global placement, legalization, and detailed placement, according to various embodiments. Morphing techniques used in one phase may be distinct or may be substantially similar to morphing techniques used in another phase, varying according to implementation. In some embodiments, different processing phases proceed with morphing operations operating according to respective morphing classes, i.e. a set of morphing classes for global placement, a set of morphing classes for legalization, and a set of morphing classes for detailed placement. The morphing classes according to phases may be distinct or may be substantially similar to one another, according to embodiment.

[0080] SDI-directed placement operations, when applied in a structured array design flow context, may include specialized forces relating to various "morphing classes" representing categories of structured array resources or related functionality. For example, resources for combinational circuitry may be grouped in a combinational morphing class, while resources for sequential circuitry may be grouped in a sequential morphing class. In some situations morphable-devices are restricted to implementation by resources belonging to a limited set of morphing classes. Continuing with the example, combinational logic morphable-devices may be restricted to implementation by resources of the combinational morphing class, while sequential logic morphable-devices may be restricted to implementation by sequential morphing class elements. One or more specialized forces relating to each of the morphing classes may be used during global placement to effect spreading of morphable-devices according to corresponding morphing classes. Continuing with the example, a combinational spreading force may be selectively applied to combinational logic morphable-devices, while a sequential spreading force may be selectively applied to sequential logic morphable-devices. In certain embodiments, it is useful to subject all devices in the netlist (whether morphable or not) to a single spreading force that acts to drive the circuit toward a density that is sustainable on the implementation architecture, and to augment the spreading force with the specialized resource-class-specific spreading forces to further tune the placement.

Structured Arrays

[0081] In some usage scenarios structured arrays are implementation vehicles for the manufacture of integrated circuits, as described elsewhere herein. Structured arrays in certain embodiments include fundamental building blocks (known as "tiles") instantiated one or more times across an integrated circuit substrate to form a Structured Array Fabric (SAF). In some embodiments structured arrays are homogeneous (i.e. all of the tiles are identical), while in some embodiments the arrays are heterogeneous (i.e. some of the tiles are distinct with respect to each other). Heterogeneity may occur as a result of tile type, arrangement, or other differences. Regardless of tile number and arrangement, however, the SAF tiles are fixed (i.e. prefabricated) and independent of any specific design implemented thereupon.

[0082] SAF tiles, according to various embodiments, may include any combination of fully or partially formed active elements (such as transistors, logic gates, sequential elements, and so forth), as well as fully or partially formed passive elements (such as metallization serving as wires and vias providing interconnection between layers of metal). In some SAF embodiments "lower" layers of interconnect are included in SAF tiles (as the lower layers are formed relatively early in fabrication), while "upper" layers of interconnect are specific to a design (as the upper layers are formed relatively later in fabrication). Such SAF embodiments permit the lower prefabricated (and thus non-customizable) layers to be shared between different design implementations, while the higher/customizable layers provide for design-specific specialization or personalization.

[0083] SAF structures may be used to construct an entire chip, or may constitute only a portion of the floorplan of an encompassing circuit, allowing for design variation. The size of the SAF tiles is generally irrelevant to design flows, and a tile may be as small and simple as a single inverter or as large and complex as a Random Access Memory (RAM) block or other large-scale Intellectual Property (IP) element.

[0084] EDA flows targeting designs based on structured array technology (such as the SDI-directed flow described elsewhere herein) account for the predetermined nature of the array, from gate-level netlist synthesis through subsequent implementation processing including layout of cells and interconnect. Such EDA flows enable realizing advantages of manufacture of integrated circuits including SAF tiles. The advantages include reduced manufacturing cost, as fewer mask layers (for example those corresponding to upper layers of interconnect) are customized for each design, as well as reduced characterization cost (for example by re-use of known structures such as the SAF tiles).

[0085] This introduction concludes with a collection of exemplary illustrative combinations, including some explicitly enumerated as "ECs", that tersely summarize illustrative systems and methods, in accordance with the concepts taught herein. Each of the illustrative combinations or ECs highlights various combinations of features using an informal pseudo-claim format. These compressed descriptions are not meant to be mutually exclusive, exhaustive, or restrictive, and the invention is not limited to these highlighted combinations. As is discussed in more detail in the Conclusion section, the invention encompasses all possible modifications and variations within the scope of the issued claims.

General Exemplary Illustrative Combinations

[0086] A structured array embodiment for physical design (layout) flow for structured arrays, including global placement, followed by legalization, followed by detail placement, and wherein the global placement is performed using simultaneous continuous integration modeling. The structured array embodiment, wherein detail placement further includes integrated morphing of forms within each Q-Block, to improve routability, timing, or other relevant physical design quality of result metrics, the morphing of forms being a transformation of the netlist wherein the implementation form of a cell is exchanged with any member of its functional equivalency class.

[0087] A T-cycling embodiment, based upon the structured array embodiment, further wherein routability is improved, based on relevant physical design quality of result metrics, through the application of thermodynamic compression cycles (T-cycling). The T-cycling embodiment, wherein the T-cycling is manually driven. The T-cycling embodiment, wherein the T-cycling is automatically driven. The T-cycling embodiment, wherein legalization is accomplished by quantization, a process consisting of defining a grid of quantization windows (Q-Blocks), binning the form-level circuit nodes into said Q-Blocks according to their global placement assigned coordinates, and performing morphing operations to balance the demand for resources in the structured array against the supply thereof within each Q-Block.
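The quantization step described above (binning form-level nodes into Q-Blocks by their global placement coordinates, then tallying resource demand against per-block supply) can be sketched as follows. The data layout, function names, and a uniform per-block supply are simplifying assumptions.

```python
# Sketch of Q-Block quantization (assumed names and single-resource model).

def bin_into_qblocks(nodes, qblock_size):
    """nodes: list of (x, y, demand). Returns {(col, row): total demand},
    binning each node by its global-placement coordinates."""
    blocks = {}
    for x, y, demand in nodes:
        key = (int(x // qblock_size), int(y // qblock_size))
        blocks[key] = blocks.get(key, 0) + demand
    return blocks

def oversubscribed(blocks, supply_per_block):
    """Q-Blocks whose demand exceeds supply: the candidates for morphing
    (or node migration) during legalization."""
    return [key for key, demand in blocks.items() if demand > supply_per_block]
```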

[0088] The structured array embodiment, wherein legalization additionally includes continued time stepping with forces modeling resource depletion. The structured array embodiment, wherein legalization additionally includes continued time stepping with modified interaction strengths of various partial density fields. The structured array embodiment, wherein legalization additionally includes partitioning between Q-Blocks. The structured array embodiment, wherein legalization specifically includes migration of nodes between abutting Q-Blocks, with integrated morphing to drive the system toward a state in which all nodes within a Q-Block are implemented with particular forms such that the sum of each resource type required by the nodes in a Q-Block is less than or equal to the supply of resources of that same type within the Q-Block. The structured array embodiment, wherein legalization additionally includes recursive bi-sectioning. The structured array embodiment, wherein legalization is pursued at increasingly finer granularities (smaller sized Q-Blocks) to improve the quality of the resulting placement.

[0089] A standard cell embodiment for physical design flow for standard cells including global placement, followed by legalization, followed by detail placement, and wherein the global placement is performed using simultaneous continuous integration modeling. The standard cell embodiment, wherein the spreading force is computed directly from the standard cell area of the comprising forms in the netlist. The standard cell embodiment, wherein driver sizing is accomplished via morphing. The standard cell embodiment, wherein legalization is performed with respect to area demand referred to a tiled grid of Q-Blocks. The standard cell embodiment, wherein legalization is performed with respect to configurable windows defined by a recursive bisectioning partitioner. The standard cell embodiment, wherein detail placement further employs SDI to compute actual slot locations for each/some standard cell in the netlist. The standard cell embodiment, wherein detail placement further employs an in-built partitioner to resolve placement failures arising from global placements that cannot be effectively solved at a given Q-Block size due to the uneven sizes of standard cells.

Integer-Linear-Programming-Based Morphing Exemplary Illustrative Combinations

[0090] EC1) A method (referred to herein as "morphing") of improving EDA physical design quality of results for structured ASIC logic arrays by exchanging the form-level instances in the structural gate-level netlist with functionally equivalent alternate forms using different resources; the forms being apportioned to netlist nodes in a predetermined manner.

[0091] EC2) The method of EC35, wherein:
form instances are prioritized for morphing according to their footprint onto oversubscribed resources;
alternate forms are prioritized according to an objective function; and
the best scoring available alternate form is "taken".

[0092] EC3) The method of EC2, wherein: the objective function evaluates resource criticality both as a function of the ratio of demand to supply, and also of supply ratios between different resources. In other words, if two resources are both exactly 50% utilized, the one with smaller supply is deemed more critical.
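EC3's criticality notion can be made concrete with one possible scoring formula. The specification does not give a formula; the one below is an invented example satisfying the stated property: the score grows with the demand-to-supply ratio, and among equally utilized resources the one with smaller supply scores as more critical.

```python
# Hypothetical criticality score for EC3 (formula assumed, not specified).

def criticality(demand, supply):
    """Higher for higher utilization; among equal utilizations, higher for
    the resource with the smaller absolute supply."""
    return (demand / supply) * (1.0 + 1.0 / supply)
```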

[0093] EC4) The method of EC35, wherein: additionally inverters are temporarily removed from the collection in order to improve the accessibility of preferred solution states to the remaining nodes. Once other nodes are morphed, inverters are reinserted, morphing as needed.

[0094] EC5) The method of EC35, wherein: any forms impinging on full resources are temporarily removed, in order to improve the accessibility of preferred solution states to the remaining nodes. Once the other nodes are morphed, the first collection of temporarily removed nodes is reinserted, morphing as needed.

[0095] EC6) The method of EC35, wherein: any of the above techniques: deterministic interchange (EC2), inverter removal (EC4), and generalized element removal (EC5), are used in any order or combination, including omission.

[0096] EC7) The method of EC35, wherein: the morphing problem is solved by constructing an integer linear program, the integer linear program including constraint equations and constraint inequalities, an objective function, and using an ILP solver to solve and optimize the system so formulated; the ILP solver generating form quotas.

[0097] EC8) The method of EC7, wherein: the constraint equations enforce the conservation of instances implementing each function in the library that is represented in the netlist.

[0098] EC9) The method of EC7, wherein: the constraint inequalities guarantee that resources cannot be oversubscribed, and that the usage of each resource is integral and non-negative.
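The program of EC7-EC9 can be illustrated with a toy exhaustive search standing in for a real ILP solver (which would be used in practice). The library data, costs, and supplies below are invented; the two constraints enforced are exactly the ones named in EC8 (conservation of instances per function) and EC9 (no oversubscription, integral non-negative usage).

```python
# Toy stand-in for the EC7 integer linear program (invented data; a real
# flow would hand the same constraints and objective to an ILP solver).

from itertools import product

def solve_form_quotas(instances, forms, supply, cost):
    """forms: {form: {resource: use}}. Returns the best {form: quota},
    or None if no fitting quota assignment exists."""
    names = sorted(forms)
    best = None
    for quotas in product(range(instances + 1), repeat=len(names)):
        if sum(quotas) != instances:            # EC8: conservation equation
            continue
        usage = {}
        for name, q in zip(names, quotas):
            for res, use in forms[name].items():
                usage[res] = usage.get(res, 0) + q * use
        if any(use > supply.get(res, 0) for res, use in usage.items()):
            continue                            # EC9: no oversubscription
        score = sum(q * cost[n] for n, q in zip(names, quotas))
        if best is None or score < best[0]:     # minimize objective
            best = (score, dict(zip(names, quotas)))
    return best[1] if best else None
```

With a trivial (constant-coefficient) objective as in EC11, the search degenerates to pure feasibility checking; the cost table here instead models gate efficiency as in EC12.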

[0099] EC10) The method of EC7, wherein: the constraint inequalities contain additional "combination constraints" which support the case of reconfigurable resources which can contribute to different resource pools.

[0100] EC11) The method of EC7, wherein: the objective function is trivial (coefficients are constant).

[0101] EC12) The method of EC7, wherein: the objective function is gate efficiency.

[0102] EC13) The method of EC7, wherein: the objective function is a measure of circuit performance.

[0103] EC14) The method of EC7, wherein: the objective function is a measure of levels of logic.

[0104] EC15) The method of EC7, wherein: the objective function is a measure of levels of logic on critical paths.

[0105] EC16) The method of EC7, wherein: the form quotas obtained from the ILP solver are apportioned to netlist nodes indiscriminately.

[0106] EC17) The method of EC7, wherein: the form quotas obtained from the ILP solver are apportioned to netlist nodes on the basis of a configurable nodal priority.

[0107] EC18) The method of EC17, wherein: the nodal priority is timing slack.

[0108] EC19) The method of EC17, wherein: the nodal priority is a function of the number of critical paths flowing through the node.

[0109] EC20) The method of EC17, wherein: the nodal priority is a function of the severity of the slack as well as the number of critical paths flowing through the node.

[0110] EC21) The method of EC7, wherein: the form apportionment is identically equal to the form quotas obtained from the ILP solver.

[0111] EC22) The method of EC7, wherein: the form apportionment is allowed to deviate from the quotas obtained from the ILP solver, so long as the resources in the morphing region are not exhausted.

[0112] EC23) The method of EC17, wherein: the nodal priority is used to order access to form apportionment states that deviate from the ILP-derived quotas, subject to maintaining a fitting solution.

[0113] EC24) The method of EC17, wherein: a node which cannot get its ideal form due to exhaustion by other higher-priority nodes takes an alternate form in the same function group, the entire set of nodes being assigned in a single ordered processing loop.

[0114] EC25) The method of EC17, wherein: any nodes which cannot get their ideal form due to exhaustion by other higher-priority nodes are held aside in a "trouble queue", processing then proceeding to the next node so that subsequent nodes do not have their ideal forms exhausted by upstream nodes taking forms that are not their first choice. Once the entire set of nodes has been considered once, the leftover trouble nodes are reprocessed, this time each taking the best available alternate form without regard to impact on downstream (lower priority) nodes.
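The two-pass "trouble queue" apportionment of EC25 can be sketched as follows; the data shapes and names are assumptions. Nodes are visited in priority order, a node whose ideal form quota is exhausted is set aside rather than greedily consuming an alternate, and trouble nodes are resolved only after every node has been seen once.

```python
# Sketch of EC25 trouble-queue apportionment (assumed data shapes).

def apportion(nodes, quotas, alternates):
    """nodes: [(priority, ideal_form)]; quotas: {form: available count};
    alternates: {form: [fallback forms in preference order]}.
    Returns the assigned form per node, in input order."""
    remaining = dict(quotas)
    assigned = {}
    trouble = []
    # First pass, highest priority first: take the ideal form if available,
    # otherwise hold the node aside instead of consuming an alternate.
    for idx, (_, ideal) in sorted(enumerate(nodes), key=lambda t: -t[1][0]):
        if remaining.get(ideal, 0) > 0:
            remaining[ideal] -= 1
            assigned[idx] = ideal
        else:
            trouble.append((idx, ideal))
    # Second pass: trouble nodes take the best available alternate form.
    for idx, ideal in trouble:
        for alt in alternates[ideal]:
            if remaining.get(alt, 0) > 0:
                remaining[alt] -= 1
                assigned[idx] = alt
                break
    return [assigned.get(i) for i in range(len(nodes))]
```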
[0115] EC26) The method of EC7, wherein: morphing is applied to the entire netlist to optimize overall structured ASIC size.

[0116] EC27) The method of EC7, wherein:
a first fitting size is determined (by linear extrapolation from the resource with the greatest ratio of demand to supply);
a next target size is chosen as a percentage of the first fitting size, iterating in this way until a non-fitting size is found; and
a range bisecting approach is followed until a fitting logic array size is found such that if the logic array is any smaller, a fitting morph can no longer be achieved.
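The sizing search of EC27 can be sketched under assumptions: given a monotone fits(size) predicate (true when a fitting morph exists at that logic array size), shrink by a fixed percentage until fitting fails, then bisect the bracketed range down to the smallest fitting size. The predicate, shrink factor, and integer size model are all stand-ins.

```python
# Sketch of the EC27 size search (assumed predicate and shrink factor).

def minimal_fitting_size(first_fit, fits, shrink=0.8):
    """first_fit: a size known to fit (from linear extrapolation).
    Returns the smallest size for which fits() holds."""
    hi = first_fit
    lo = int(hi * shrink)
    while fits(lo):                  # step down until a non-fitting size
        hi, lo = lo, int(lo * shrink)
    while hi - lo > 1:               # bisect: lo never fits, hi always fits
        mid = (lo + hi) // 2
        if fits(mid):
            hi = mid
        else:
            lo = mid
    return hi
```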

[0117] EC28) The method of EC7, wherein: morphing drives toward a configuration where the ratios of demands for resources match the ratios of their supply, i.e., "stoichiometrically balanced".

[0118] EC29) The method of EC7, wherein: morphing is applied to the entire netlist to determine whether a given design can be packed into a given fixed size of structured ASIC.

[0119] EC30) The method of EC7, wherein: morphing is applied to a subset of the netlist in order to determine whether or not this subset can be placed within a subset of the area of the structured ASIC.

[0120] EC31) The method of EC7, wherein: morphing is applied to a spatially decomposed sectioning of the netlist and the structured ASIC, in order to find a fitting configuration (a solution without oversubscription of resources) of each subsection of the netlist into the corresponding subsection of the structured ASIC.

[0121] EC32) The method of EC31, wherein: morphing is applied to one or more of the above subsets, in order to optimize the apportionment of forms in the subset with respect to a specified function.

[0122] EC33) The method of EC32, wherein: the specified function is gate efficiency.

[0123] EC34) The method of EC32, wherein: the specified function models the performance of the circuit.
[0124] EC35) The method of EC32, wherein: the specified function specifically penalizes apportionment of forms that diverges from the apportionment in the synthesis gate-level netlist.
[0125] EC36) The method of EC7, wherein: morphing regions are systematically subdivided into sub-regions, with accompanying spatially-sectioned sub-circuits, which are then individually morphed.

[0126] EC37) The method of EC36, wherein: failing subregion morphs are resolved by reallocation of nodes between the subregions, pursuant to discovery of a circuit node to subregion assignment that is morphing feasible (possible to place).

[0127] EC38) The method of EC36, wherein: the subdividing of morphing regions is accomplished via recursive bisectioning in alternating directions so as to ensure preservation of the aspect ratio of the regions every other subdivision operation.
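EC38's alternating-direction subdivision can be sketched with assumed names: each recursion level cuts the region in the direction perpendicular to the previous cut, so the original aspect ratio recurs every other subdivision.

```python
# Sketch of EC38 recursive bisection in alternating directions.

def subdivide(region, depth, vertical=True):
    """region: (x0, y0, x1, y1). Returns leaf regions after `depth` cuts,
    alternating cut direction at each recursion level."""
    if depth == 0:
        return [region]
    x0, y0, x1, y1 = region
    if vertical:                         # cut with a vertical line
        xm = (x0 + x1) / 2.0
        halves = [(x0, y0, xm, y1), (xm, y0, x1, y1)]
    else:                                # cut with a horizontal line
        ym = (y0 + y1) / 2.0
        halves = [(x0, y0, x1, ym), (x0, ym, x1, y1)]
    leaves = []
    for half in halves:
        leaves.extend(subdivide(half, depth - 1, not vertical))
    return leaves
```

After an even number of cuts every leaf is a scaled copy of the original region, which is the aspect-ratio guarantee EC39 notes is lost under adaptive mesh refinement.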

[0128] EC39) The method of EC36, wherein: the subdividing of morphing regions is accomplished via adaptive mesh refinement techniques (hence no guarantee of preservation of aspect ratio is available).

[0129] Fig. 1 is a flow diagram illustrating selected details of an embodiment of placing, routing, analyzing, and generating fabrication data for any portion of an integrated circuit according to an SDI-based flow. A representation of all or any portion of the integrated circuit is provided ("Design Description" 120), in certain embodiments including a gate-level netlist, placement constraints, timing requirements, and other associated design specific data. The gate-level netlist may be provided in any proprietary or standard format, or a hardware description language (such as Verilog).
[0130] A representation of fabrication flow is also provided ("Technology Description" 121), in certain embodiments including information relating to fabrication material starting state and manufacturing flow. The fabrication material information may include data describing wafers and any associated predetermined processing on the wafers (for example fabrication of lower layers of devices). The predetermined processing may be associated with transistors, combinational logic gates, sequential logic devices, storage arrays, regular structures, power distribution, clock distribution, routing elements, and other similar portions of active and passive circuitry. The manufacturing flow information may include information relating to physical and electrical design rules and parameters for extraction of parasitic information for analyzing results during physical design flow processing.

[0131] Flow begins ("Start" 101) and continues ("Pre-Process" 102), where the design and technology descriptions are parsed and various design-specific data structures are created for subsequent use. The design description in certain embodiments includes a gate-level netlist describing interconnections of devices (morphable-devices, according to some embodiments), as well as constraints specific to implementation of the design (such as timing and placement requirements). The technology description includes information such as library definitions, fabrication technology attributes, and descriptions of manufacturing starting material (for example data describing SAF tile arrangement and composition of active and passive elements).
[0132] Physical locations of some or all of the devices are then determined ("SDI Place & Route" 103), i.e. the design is placed, and wiring according to the netlist is determined (i.e. the design is routed). Place and route processing in certain embodiments includes multiple iterations of one or more internal processes (see "Place and Route Flow", elsewhere herein). The placed and routed design is then analyzed ("Result Analysis" 104), in certain embodiments with one or more analysis tools performing various functions such as parasitic extraction, timing verification, physical and electrical rule checking, and Layout-Versus-Schematic (LVS) formal verification.

[0133] Results of the analysis are examined by any combination of automatic (such as software) and manual (such as human inspection) techniques ("OK?" 105). If the results are acceptable, then flow continues ("Yes" 105Y) to produce information to manufacture the design according to the results ("Generate Fabrication Data" 106). The fabrication data varies by embodiment and design flow context, and may include any combination of mask describing data, FPGA switching-block programming data, and FPGA fuse/anti-fuse mapping and programming data. Processing is then complete ("End" 199).

[0134] If the results are not acceptable, then flow loops back ("No" 105N) to repeat some portion of the place and route operations. In some usage scenarios (not illustrated) one or more modifications to any combination of the design and the technology may be made before repeating some of the place and route operations. For example, synthesis may be repeated (with any combination of changes to functionality as specified by behavioral or gate-level inputs and synthesis commands), a different technology may be chosen (such as a technology having more metal layers), or a different starting material may be selected (such as choosing a "larger" structured array having more SAF tiles).
[0135] Processing functions ("Pre-Process" 102, "SDI Place & Route" 103, "Result Analysis" 104, "OK?" 105, and "Generate Fabrication Data" 106) are responsive to various instructions and input data ("Commands and Parameters" 130), according to various embodiments. The effects of the commands and parameters on the processing are represented conceptually in the figure (arrows 102C, 103C, 104C, 105C, and 106C, respectively). In various embodiments information is communicated between the processing functions (and other processing elements not illustrated) in various forms and representations, as shown conceptually ("Working Data" 131 and associated arrows 102D, 103D, 104D, and 106D, respectively). The working data may reside in any combination of processor cache, system memory, and non-volatile storage (such as disks), according to implementation and processing phase.

[0136] The illustrated placement, route, and analysis processing is applied, in various embodiments, to integrated circuits implemented in various design flows or contexts, including application specific, structured array (homogeneous and heterogeneous varieties), mask-definable gate array, mask-programmable gate array, Field-Programmable Gate Array (FPGA), and full custom. The processing may be applied to an entire integrated circuit, or one or more portions or sub-sections of an integrated circuit, according to various usage scenarios. For example, an otherwise full custom integrated circuit may include one or more regions of standard cells, and each of the standard cell regions may be processed according to all or portions of the illustration. For another example, an Application Specific Integrated Circuit (ASIC) may include some regions of standard cells and other regions of SAF tiles. Any combination of the standard cell and SAF tile regions may be processed according to all or portions of the illustrated flow. These and all similar variations are contemplated.

PLACE AND ROUTE FLOW

[0137] Fig. 2 is a flow diagram illustrating selected details of an embodiment of placing and routing any portion of an integrated circuit, according to an SDI-based flow, such as operations referred to elsewhere herein ("SDI Place & Route" 103, of Fig. 1, for example). Overall the flow includes determining approximate (i.e. subject to subsequent refinement) locations for devices, reconciling resources, determining nearly final locations and implementations for the devices, minimizing critical delay paths, and wiring the devices according to a netlist. In certain embodiments each of the elements of the flow includes internal functions to determine acceptability of results, iterate as necessary to improve the results, and to direct feedback to earlier processing functions of the flow as needed.

[0138] Processing begins ("Start" 201), in certain embodiments by receiving one or more data structures and files describing a netlist having devices and associated connectivity, along with manufacturing technology information. The structures and files may result from parsing design and technology information ("Pre-Process" 102, of Fig. 1, for example). Approximate locations for the devices of the netlist are then determined ("SDI Global Placement" 202) according to the netlist, the technology, and commands/parameters (such as those from "Commands and Parameters" 130, of Fig. 1). If global placement results are acceptable (i.e. suitable as a starting point for further processing), then flow proceeds ("OK" 202Y). If the global placement results are not acceptable, then flow loops back ("Not OK" 202N, "Repeat" 220, and "Revise" 202R) to repeat all or portions of the global placement. Revised global placement processing (via "Revise" 202R) in certain embodiments includes modifying any combination of the netlist, global placement commands and parameters, and manufacturing technology (such as specifying a larger die, or a denser device fabrication process) based in part upon previous processing.

[0139] Subsequent to acceptable global placement, resources are reconciled according to the global placement and manufacturing information ("Legalization" 203), resulting in elimination of areas of oversubscribed resources. In certain embodiments modifications are made to the global placement results (effecting "movement" of placed elements), thus producing a legalized placement. If legalization results are acceptable, then flow proceeds ("OK" 203Y). If the legalized placement is not acceptable (or not computed), then flow loops back for additional processing ("Not OK" 203N). In certain embodiments the additional processing is based on previous processing, and may include repeating any portion of global placement ("Revise" 202R via "Repeat" 220) and continuing onward, or repeating any portion of legalization ("Revise" 203R via "Repeat" 220), according to various usage scenarios and embodiments.

[0140] After acceptable legalization, nearly final (or "exact") locations and implementations for the devices are determined ("(SDI) Detailed Placement" 204). Relatively small-scale adjustments are made to legalization results, via any combination of placed element movement and placed element implementation, according to embodiment. In certain structured array embodiments, the placed element implementation includes morphing of selected devices to functionally equivalent alternatives. If detailed placement results are acceptable, then flow proceeds ("OK" 204Y). If the detailed placement is not acceptable (or not computed), then flow loops back for additional processing ("Not OK" 204N). In certain embodiments the additional processing is based in part upon previous processing, and may include repeating any portion of previous place and route functions and then continuing onward (such as via any of "Revise" 204R, "Revise" 203R, and "Revise" 202R by way of "Repeat" 220).

[0141] Subsequent to detailed placement, delay paths are minimized ("Timing Closure" 205), in certain embodiments to meet user specified timing, in various ways according to embodiment and/or user option or configuration. In certain embodiments the detailed placement is analyzed and buffers (or buffer trees) are inserted in high fanout and timing-critical nets. In some embodiments drivers are resized and optimized to meet maximum capacitance and/or required time constraints with respect to timing critical receivers. In some embodiments clock networks are synthesized, while in other embodiments the clock networks are predefined. In either case the appropriate clock network elements are inserted into the netlist for clock distribution and to meet clock skew constraints. Further according to embodiment and/or user option or configuration, other timing closure driven optimizations are performed (see "Timing Closure", elsewhere herein). If the timing closure results are acceptable, then flow proceeds ("OK" 205Y). If the timing closure is not acceptable, then flow loops back for additional processing ("Not OK" 205N). The additional processing may include repeating any portion of previous place and route functions, based in part upon previous processing, and then continuing onward (such as via any of "Revise" 205R, "Revise" 204R, "Revise" 203R, and "Revise" 202R by way of "Repeat" 220). Note that in some embodiments flow loops back as a natural consequence of timing closure processing, rather than merely as a result of not-acceptable timing closure results. For example, certain timing closure techniques call for repetition of previous processing (such as one or more of "SDI Global Placement" 202, "Legalization" 203, and "(SDI) Detailed Placement" 204), using various combinations of modified behaviors and parameters, along with optional changes to the netlist and constraints, according to various embodiments.

[0142] After timing closure is complete (or considered "close enough"), the resultant devices are wired together according to the resultant netlist ("Routing" 206), and corresponding interconnect is generated. If the routing results are acceptable, then flow proceeds ("OK" 206Y). Place and route processing is then complete ("End" 299), and results are available for further use, such as any combination of analysis and mask generation ("Generate Fabrication Data" 106 of Fig. 1, for example). If the routing results are not acceptable, then flow loops back for additional processing ("Not OK" 206N). In certain embodiments the additional processing is based in part upon previous processing, and may include repeating any portion of previous place and route functions and then continuing onward (such as via any of "Revise" 206R, "Revise" 205R, "Revise" 204R, "Revise" 203R, and "Revise" 202R by way of "Repeat" 220).

[0143] Various combinations of place and route processing functions (such as "SDI Global Placement" 202, "Legalization" 203, "(SDI) Detailed Placement" 204, "Timing Closure" 205, and "Routing" 206) may include reading and writing shared information (such as references to "Working Data" 131, of Fig. 1). Examples of working data include netlists, constraints, progress indicators, and other similar shared processing items. Various combinations of the aforementioned place and route processing functions also may include receiving one or more inputs specifying requested behaviors or processing (such as information from "Commands and Parameters" 130, of Fig. 1). Examples of commands and parameters include scripts specifying iteration closure conditions, control parameters, goal descriptions, and other similar information to guide processing. The commands and parameters may be provided via any combination of scripts, command line inputs, and graphical user interfaces, according to various embodiments.

[0144] In some embodiments processing of one or more elements of Fig. 2 is optional, or performed only for selected iterations through the illustrated flow. For example, timing closure operations may be operative in a first processing mode where legalization and detailed placement are skipped, and processing relating to timing closure is partially performed as part of global placement. Alternatively the first processing mode may be viewed as global placement operations being performed to a limited extent, then analyzed and further directed by timing closure operations (without legalization or detailed placement), and then additional global placement operations being performed. Eventually a second mode of processing may be entered where legalization and detailed placement are performed, optionally followed by additional timing closure operating as in the first mode or operating in a manner specifically tailored to the second mode (see "Timing Closure", elsewhere herein).

SIMULTANEOUS DYNAMICAL INTEGRATION (SDI) DIRECTED GLOBAL PLACEMENT

[0145] Conceptually SDI may be understood as modeling each individual device of the netlist as a node, or point particle, having an associated mass, position (or location), and velocity. The nodes representing the devices of the netlist are coupled by and interact with each other via attractive and spreading forces. The forces may include attractive forces representing electrical connections between the devices (as specified by the netlist), and spreading forces modeling resource requirements versus availability (such as a density of logic gates needed versus a density of logic gates on hand). The nodes and effects of the coupling forces are simulated as evolving over time as governed by a system of coupled ordinary differential equations using continuous variables, according to classical Newtonian mechanics (i.e. force equals mass multiplied by acceleration, or F=ma). Thus locations of nodes (corresponding to device placements) evolve over time from initial positions to subsequent positions (corresponding eventually to the global placement result for the devices).
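By way of a non-limiting illustration of the point-particle model just described (the class and function names, and the simple explicit-Euler update, are the editor's assumptions rather than anything specified by the disclosure), each node carries mass, position, and velocity, and a net force accelerates it per F = ma:

```python
from dataclasses import dataclass

@dataclass
class Node:
    """A netlist device modeled as a point particle."""
    mass: float
    pos: list   # [x, y] location in the placement plane
    vel: list   # [vx, vy] velocity

def euler_step(node, force, dt):
    """Advance one node by dt under a net force (F = m*a, explicit Euler)."""
    ax, ay = force[0] / node.mass, force[1] / node.mass
    node.vel[0] += ax * dt
    node.vel[1] += ay * dt
    node.pos[0] += node.vel[0] * dt
    node.pos[1] += node.vel[1] * dt
    return node
```

In a full simulation the forces on each node would be recomputed every timestep from the netlist springs and the density fields, so positions and forces co-evolve.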

[0146] More specifically, the independent variables in the dynamical system simulation include configuration-space variables (position and velocity) of the nodes. In certain embodiments the position and velocity representations are multi-dimensional quantities (two or three dimensions, for example), according to usage scenario and embodiment. Force terms in the coupled equations of motion are related to any combination of the topology of the connections of the devices, timing analysis of evolving device locations (placement), obstructions, and region constraints (fixed and floating), according to embodiment. Force terms may also be related to any combination of partial node density, partial resource usage density, viscous damping, energetic pumping, interconnect congestion effect modeling, power or clock distribution, and signal integrity representation, according to embodiment. Force terms may include any function of the independent variables, provided commands and parameters, and other similar mathematical devices useful in managing numerical behavior of continuous time integration of the system of nodes and forces.
[0147] In certain embodiments the obstructions are represented as exclusion zones, and arise as a result of architectural considerations, location-fixed (or predetermined) blocks (such as large RAM arrays or IP elements), and other similar placement limiting conditions. In certain embodiments the region constraints are represented as fixed, relative, or floating location requirements on selected devices of the netlist. Corresponding position requirements (such as an initial position with no subsequent change during system simulation time) are imposed for the corresponding nodes in the dynamical simulation. Various combinations of region constraints (relating to integrated circuit floorplan specifications, for example) may be developed by any combination of automatic techniques (by software, for example) and manual techniques (by users), according to usage scenarios and embodiments.

[0148] Conceptually the system of coupled simultaneous differential equations is operational in continuous variables. While it is envisioned that certain embodiments will perform at least some of the integration according to true analog integration techniques, in which the state variables are actually continuous, in digital computer embodiments the integration is performed using digital integration techniques. Digital computers are limited to representing all quanta with finite-precision variables, and continuous time integration may be implemented on digital computers using "pseudo-continuous" numerical approximation techniques, a.k.a. "numerical methods." Even when implemented using finite-precision approximations, the "continuous variables" abstraction is a useful way to conceive and describe some of the techniques described herein and to distinguish compared to other approaches using conceptually discrete variables. Thus the term continuous as used throughout this disclosure should be interpreted in accordance with the foregoing.

[0149] In digital computer embodiments, continuous state variables (including those variables representing simulation time, mass, location, and velocity) are approximated as any combination of single, double, or extended floating-point numbers. The continuous time integration of the simultaneous coupled dynamical governing equations may be performed in digital computer embodiments by any suitable digital integration technique, such as Runge-Kutta, predictor-corrector, leap-frog, and any similar technique adaptable to continuous multi-variable state space integration. In some embodiments the integration technique is chosen for suitability based at least in part on adaptability to parallel processing (see "Computer System Executing SDI-Directed EDA Routines", elsewhere herein).
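As a concrete (and assumed, not disclosure-specified) instance of one such digital integration technique, a classical fourth-order Runge-Kutta step for a generic first-order system state' = deriv(t, state) can be written as:

```python
def rk4_step(deriv, state, t, dt):
    """One classical fourth-order Runge-Kutta (RK4) step.

    deriv(t, state) returns the time derivative of each state entry;
    the four stage slopes k1..k4 are blended with weights 1,2,2,1.
    """
    k1 = deriv(t, state)
    k2 = deriv(t + dt / 2, [s + dt / 2 * k for s, k in zip(state, k1)])
    k3 = deriv(t + dt / 2, [s + dt / 2 * k for s, k in zip(state, k2)])
    k4 = deriv(t + dt, [s + dt * k for s, k in zip(state, k3)])
    return [s + dt / 6 * (a + 2 * b + 2 * c + d)
            for s, a, b, c, d in zip(state, k1, k2, k3, k4)]
```

Because each node's state enters the derivative function independently, the per-node evaluations inside `deriv` are a natural target for the parallel processing mentioned above.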

[0150] The forces acting in the system provide coupling between the nodes and act to accelerate the nodes over time, resulting in movement of the nodes throughout the state-space over time. A set of attractive forces (known as "net attractive forces") is modeled to represent connectivity between the devices of the netlist, or more specifically between pins (i.e. terminals of circuit elements) of devices. In some embodiments the net attractive forces are modeled as individual springs between a pin of one device and a pin of another device, with every interconnection between any two pins being modeled as a corresponding spring. Force associated with each spring is computed according to Hooke's law (force is proportional to distance between the pins). The net attractive force acting on each device is a vector sum of all net attractive forces acting on all of the pins of the respective device.
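A minimal sketch of this Hooke's-law spring model follows; the data layout (a dict of pin coordinates and a list of pin-to-pin connections) and the single shared spring constant are simplifying assumptions by the editor:

```python
def net_attractive_forces(pin_pos, nets, k=1.0):
    """Sum Hooke's-law spring forces per pin.

    pin_pos: {pin_name: (x, y)}; nets: list of (pin_a, pin_b) pairs,
    one entry per modeled spring. Force on a pin is k times its
    displacement toward the connected pin; returns {pin: [fx, fy]}.
    """
    forces = {p: [0.0, 0.0] for p in pin_pos}
    for a, b in nets:
        ax, ay = pin_pos[a]
        bx, by = pin_pos[b]
        # Hooke's law: magnitude proportional to pin-to-pin separation,
        # equal and opposite on the two endpoints.
        forces[a][0] += k * (bx - ax); forces[a][1] += k * (by - ay)
        forces[b][0] += k * (ax - bx); forces[b][1] += k * (ay - by)
    return forces
```

The net force on a device would then be the vector sum of the entries for all of that device's pins; fanout-dependent or non-linear spring laws (as in [0151]) would replace the constant `k` with a per-net function.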

[0151] In some embodiments the constant of proportionality used to calculate spring force is identical for all springs. In some embodiments the constant of proportionality is dependent on the fanout of a net (i.e. the number of pins connected together). In some embodiments relatively high fanout nets are considered to be one or more drivers providing a signal to one or more loads. Springs between the loads of the relatively high fanout nets are eliminated (while springs from drivers to loads are retained). In some embodiments springs between drivers and loads have a different constant of proportionality than other springs. Modeling of net attractive forces is not restricted to ideal springs, and may instead be based on a general linear or non-linear force model, according to various embodiments.

[0152] A set of spreading forces (known as "spatial spreading forces") is modeled based on one or more macroscopic density fields. In certain embodiments the density fields are computed based on analysis of metrics associated with respective devices corresponding to the nodes (and their locations) in the dynamical system. The metrics may include any combination of standard cell area (in, for example, standard cell flow processing), fabric resource consumption (in, for example, SAF flow processing), equivalent gate count, and other similar functions of node properties. In some embodiments the spatial spreading forces (see "Field-Based Force Components", elsewhere herein) are with respect to a density field based on resource utilization of corresponding nodes in a local region. In some embodiments resource utilization may be evaluated using an area averaging or summation of nearby devices or an equivalent-gate count rating (cost function) of spatially close devices.

[0153] In some embodiments a plurality of density fields are computed with respect to a plurality of metrics. In some embodiments any combination of first, second, and third density fields are computed with respect to first, second, and third categories of logic devices (such as combinational logic devices, sequential logic devices, and total logic devices). In some embodiments each of a plurality of partial density fields is computed according to a set of respective non-interchangeable morphing classes (such as combinational and sequential morphing classes) associated with an underlying SAF. In some embodiments (such as selected standard cell based design flows) the density fields are computed based wholly or partially on device area. In some embodiments (such as selected structured array based design flows) the density fields are computed based wholly or partially on resource utilization as measured by counts of the number of each type of resource needed to implement the function associated with each device in the netlist.

[0154] Other attractive and spreading forces may also be included, according to usage scenario and embodiment. Floorplan constraints, or various region constraints, may be expressed as attractive or spreading forces, or as potential wells (with a tendency to retain nodes in a region) or potential barriers (with a tendency to disperse nodes from a region), according to usage scenario and embodiment. For example, boundaries of a die, or locations of input/output (IO) rings may be expressed as fixed constraints that are mapped to attractive forces acting on nodes having interconnect to the IO ring. For another example, a selected region of the die may be excluded from use (such as for yield improvement or noise reduction) by fixed or relative (i.e. floating) constraints that are mapped to spreading forces acting on nearby or all nodes (see "Exclusion Zones", elsewhere herein). In other embodiments or modes of operation, such floorplan constraints may be implemented through coordinate clipping inside the integrator, thereby preventing the motion of devices into disallowed regions.
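The coordinate-clipping alternative can be sketched as follows. The rectangular representation of exclusion zones and the nearest-edge tie-breaking rule are the editor's assumptions; the disclosure only states that motion into disallowed regions is prevented:

```python
def clip_to_allowed(pos, exclusions):
    """Push a node position out of rectangular exclusion zones.

    exclusions: list of (xmin, ymin, xmax, ymax) rectangles. A point
    strictly inside a zone is moved to the nearest zone edge, which
    is the smallest displacement that restores legality.
    """
    x, y = pos
    for xmin, ymin, xmax, ymax in exclusions:
        if xmin < x < xmax and ymin < y < ymax:
            # candidate displacements to each of the four edges
            moves = [(xmin - x, 0.0), (xmax - x, 0.0),
                     (0.0, ymin - y), (0.0, ymax - y)]
            dx, dy = min(moves, key=lambda m: abs(m[0]) + abs(m[1]))
            x, y = x + dx, y + dy
    return (x, y)
```

Applied after every integration step, such clipping keeps the simulated trajectory inside the legal placement region without adding force terms.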

[0155] User specified circuit timing constraints may warrant that certain pins in the netlist be moved closer together to improve the performance of the design. A corresponding set of attractive forces between drivers and select loads is fed into the system with configurable binding strength.

Viscous Damping

[0156] Forces other than attractive and spreading forces between nodes or other elements may also be accounted for. As an example, a viscous damping force may be included as a way to (a) compensate for the effect of numerical errors (potentially incurred by the time integration and spatial differencing techniques used) contributing toward numerical heating, and (b) change the ratio between kinetic and potential energy of the node distribution. The damping serves to decelerate the ballistic motion of a node. One embodiment of such a force on a given node is a term proportional to the negative of the node velocity, with the proportionality constant being equal to μ, the global coefficient of viscosity. The value of μ may be supplied by direct manual input (by a user) or via automatic control (under software control), according to embodiment, to provide partial control of the node distribution as a whole.

[0157] While μ is a global constant, it may have a local effect, and thus in some embodiments other parameters are selected for manipulation to provide control of the node distribution as a whole. For example, in some implementations a ratio of KE/TE, where KE is the kinetic energy of the node distribution and TE is the total energy of the system, is a convenient control parameter.

[0158] In some embodiments, the global viscosity coefficient is split into two terms, a gradually drifting term and a dynamically calculated term. The gradually drifting term enables the system to gradually adapt to time varying forces or parameter changes, while the dynamical term prevents runaway acceleration on a per-timestep basis.

[0159] Each timestep the total effective μ is adjusted in response to normalized kinetic energy (KE/TE) changes from a selected target value. In certain embodiments the adjustment to μ is given by:

If KE/TE > target then:
    dm = cdm1 * ( (KE/TE / target) - 1 ) + cdm2 * ( (KE/TE / target) - 10 )^2
    μ_eff = μ * ( 1 + dm )
    μ *= ( 1 + <small adjustment> )
If KE/TE < target then:
    dm = cdm1 * ( (target / KE/TE) - 1 ) + cdm2 * ( (target / KE/TE) - 10 )^2
    μ_eff = μ / ( 1 + dm )
    μ /= ( 1 + <small adjustment> )
where:
    double mu_max = 1.e+8;
    double cdm1 = 1.;
    double cdm2 = 0.01;
Note that "double" refers to double-precision variables used in some embodiments.

[0160] The <small adjustment> may vary with the relative difference between the target and actual values of KE/TE, and tends to be small compared to 1. The term "mu_max" limits μ to prevent numerical problems with a timestepper used for numerical integration. The quadratic term contributes little until KE/TE differs from the target by a factor of 10, and quenches runaway conditions.

[0161] By splitting the calculation of μ into a purely dynamical term and a slowly varying term, the system remains generally stable while retaining an ability to react quickly to energy spikes. Further, by using a constant μ during the course of the time integration, performance may be enhanced, as operation counts are substantially reduced and adaptive integrator timesteps (if relevant) may be allowed to increase.
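A runnable transcription of the [0159] adjustment is sketched here. The function name, the `small` parameter standing in for the <small adjustment> term (with a hypothetical default), and the capping of both returned values at mu_max are editorial assumptions layered on the pseudocode:

```python
# Illustrative constants, taken from the pseudocode in [0159].
CDM1, CDM2 = 1.0, 0.01
MU_MAX = 1.0e8  # "mu_max": upper bound protecting the timestepper

def adjust_viscosity(mu, ke_te, target, small=0.01):
    """One per-timestep update of the global viscosity coefficient.

    ke_te is the normalized kinetic energy KE/TE. Returns (mu_eff, mu):
    the dynamically boosted value used for this timestep, and the
    slowly drifting base value carried into the next timestep.
    """
    if ke_te > target:
        r = ke_te / target
        dm = CDM1 * (r - 1.0) + CDM2 * (r - 10.0) ** 2
        mu_eff = mu * (1.0 + dm)   # damp harder when the system is "hot"
        mu *= 1.0 + small          # gradual drift of the base term
    elif ke_te < target:
        r = target / ke_te
        dm = CDM1 * (r - 1.0) + CDM2 * (r - 10.0) ** 2
        mu_eff = mu / (1.0 + dm)   # damp less when the system is "cold"
        mu /= 1.0 + small          # gradual drift of the base term
    else:
        mu_eff = mu
    return min(mu_eff, MU_MAX), min(mu, MU_MAX)
```

The split is visible in the return value: `mu_eff` reacts instantly to an energy spike, while the base `mu` only creeps by the small adjustment each call.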

[0162] In some embodiments a viscous damping proportionality constant is identical for all nodes in the system, while in other embodiments one or more distinct proportionality constants may be employed. For example, in certain embodiments the viscous damping proportionality constant is modeled as a scalar field of position and the value of the constant at the position of each circuit device is computed. Moreover, in certain embodiments the scalar field is analytically specified, and selectively includes a dependence upon the independent time variable. In other embodiments the scalar field is a derived quantity computed from other numerical characteristics that may be evaluated for the time-evolving simulation. Additionally, the viscous force is not limited to being proportional to the velocity of a node. In certain embodiments the viscous force instead follows a functional form based on other selected state of the system.
[0163] The aforementioned forces are merely representative examples. Forcing terms may be associated with interactions between one or more nodes, and between one or more fixed (or immovable) elements. Forcing terms may also be associated with fields that may be dependent in some way upon one or more nodes, or with fields that are independent of nodes. These and all similar types of forcing terms are contemplated in various embodiments.

[0164] Thus forces on the nodes of the system include direct interactions with topological neighbors (according to the netlist), collective interactions involving numerical constructs associated with temporal bulk properties of the node distribution, and interactions with architectural features of the implementation. The result of the combination of forces impinging on the system nodes is a complex dynamical interaction where individual nodes meander through the placement domain under the influence of the forces and wherein the forces vary continuously with the motion of all nodes in the netlist. The motion exhibits both chaotic and coherent behaviors. The motion of a given node may appear chaotic in the sense that the node trajectory may meander back and forth as a result of connections to other nodes. Yet the system may also exhibit coherent (or collective) motion in the sense that tightly connected nodes will tend to move in bulk and remain in proximity to topological neighbors even as the tightly connected nodes collectively move far from respective starting points.

[0165] The integration of the governing equations of motion proceeds using standard techniques of numerical integration. (See, for example, a reference describing numerical integration.) As an example, the next several paragraphs assume the use of a Runge-Kutta integrator.

[0166] The computation of the forcing terms is referred to as "computing the derivatives". Differentiation with respect to time is denoted by ' (prime), so that dx/dt = x', d²x/dt² = x'', and so forth. The following variables are introduced to set up the governing equations for solution by numerical integration:

    v_x,i = (x_i)'
    v_y,i = (y_i)'

The subset of the system of equations relating to the ith node (for a two-dimensional layout application) is:

    (x_i)' = v_x,i
    (y_i)' = v_y,i
    (v_x,i)' = F_x,i
    (v_y,i)' = F_y,i

[0167] Thus the system of simultaneous second order differential equations is transformed to a (larger) system of simultaneous first order differential equations, where the right hand side of each equation is the derivative of the respective left hand side. Conceptually computation of a derivative per se is not required (unless some element of the forcing terms is itself expressed as a derivative of something else), but rather the right hand sides of the equations are the derivatives.
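This second-to-first-order transformation can be made concrete as a derivative function over a flattened state vector; the [x, y, vx, vy] packing and the per-node force list are the editor's illustrative choices, with the acceleration taken as force over mass per F = ma:

```python
def derivatives(state, forces, masses):
    """Right-hand sides of the first-order system for n nodes in 2-D.

    state  = [x0, y0, vx0, vy0, x1, y1, vx1, vy1, ...] per node
    forces = [(Fx_i, Fy_i)] net force currently acting on each node
    masses = [m_i] per-node mass
    Returns the derivative of every state entry: x' = v, v' = F/m.
    """
    out = []
    for i, (fx, fy) in enumerate(forces):
        x, y, vx, vy = state[4 * i: 4 * i + 4]
        out += [vx, vy, fx / masses[i], fy / masses[i]]
    return out
```

A function of exactly this shape is what a Runge-Kutta or predictor-corrector stepper would call at each stage evaluation.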

[0168] There is time-varying complexity in the behavior (character of motion) of the moveable nodes in the netlist when the forcing terms are time varying. In some embodiments a time varying timestep is used to preserve numerical accuracy and to continue processing until convergence criteria (error limits) are met during each timestep in the integration.

[0169] The integrator accepts as input a specification of a desired timestep, and then processes the timestep in two ways: once directly, and once as two half-steps. If the results are not close enough as determined by a specifiable error-norm, then the target timestep is reduced until it is possible to perform the one-step plus the two-half-steps approaches with results within an error norm. Besides new coordinate values for the independent variables, the integrator also returns the length of the timestep just taken and the advised length for the next timestep. Thus during periods of laminar motion when numerical convergence is readily achieved, the timestep trends longer on successive calls to the integrator. But in periods of turbulent or chaotic motion, where convergence requires more effort, the timesteps become as small as needed to ensure the accuracy of the integration.
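The step-doubling control just described can be sketched as follows. The use of a midpoint method as the base integrator, the max-norm error measure, and the 1.5x growth advice are editorial assumptions; the disclosure leaves the base integrator and error norm configurable:

```python
def midpoint_step(deriv, state, t, dt):
    """Second-order midpoint method, used here as the base integrator."""
    k1 = deriv(t, state)
    mid = [s + dt / 2 * k for s, k in zip(state, k1)]
    k2 = deriv(t + dt / 2, mid)
    return [s + dt * k for s, k in zip(state, k2)]

def adaptive_step(deriv, state, t, dt, tol=1e-6):
    """Take the step once directly and once as two half-steps; halve
    dt until the two answers agree within tol (max-norm).

    Returns (new_state, dt_taken, dt_advised_for_next_call).
    """
    while True:
        full = midpoint_step(deriv, state, t, dt)
        half = midpoint_step(deriv, state, t, dt / 2)
        two = midpoint_step(deriv, half, t + dt / 2, dt / 2)
        err = max(abs(a - b) for a, b in zip(full, two))
        if err <= tol:
            # converged: keep the finer answer, advise a longer step
            return two, dt, dt * 1.5
        dt /= 2  # not close enough: retry with a smaller timestep
```

During smooth ("laminar") motion the advised timestep grows call over call; during chaotic motion the inner loop shrinks it until the error norm is satisfied.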

[0170] Fig. 3A is a flow diagram illustrating selected details of an embodiment of global placement according to SDI modeling and simulation, such as operations referred to elsewhere herein ("SDI Global Placement" 202, of Fig. 2, for example). Overall the flow includes various functions to enable and perform a series of dynamical simulations based on Newtonian mechanics on a system representing the netlist and associated design constraints and targets. The simulations use SDI techniques to orchestrate the interactions between particles (representing netlist devices). The SDI techniques make use of fields that are calculated as functions of the particle positions. The functions include determining a set of nodes corresponding to the devices in the netlist, initialization of state variables (including mass, location, and velocity associated with each node), adjusting forces, and evolving the resultant system of simultaneous dynamical governing equations forward in time via integration. The flow is repeated beginning at the adjustment processing until a suitable result is available, or it is determined that a suitable result will not become available without further processing outside of the illustrated flow.

[0171] Processing begins ("Start" 301) with receipt of pre-processed information, in certain embodiments as data structures representing the netlist and the associated devices and connectivity ("Pre-Process" 102, of Fig. 1, for example). Further data structures for representing a system of nodes and forces are created and initialized ("Determine Nodes and Forces" 302), with each node in the system corresponding one-to-one with each device of the netlist, and with each node having a corresponding set of forces acting on it. State variables for the dynamical simulation are initialized ("Initialize State Variables" 303), including determining starting values for mass, location, and velocity state variables for each node. The initial node locations correspond to initial placements of the corresponding netlist devices (see "Initial Placement", elsewhere herein). Initial force values are also determined.

[0172] Large-scale goal-driven modifications to the forces in the system are then made ("Macro Adjust Forces" 304). In some embodiments one or more attractive forces are over- or under-weighted for periods of time, and one or more spreading forces may also be reduced or increased in relative proportion to the attractive forces. For example, a "condensing" phase may inflate attractive forces and deflate spreading forces, and an "extending" phase may deflate attractive forces and inflate spreading forces. Operations associated with the macroscopic force adjustment track simulation time and change the forces according to condensing and extending phases. During the phases of system evolution, the coordinates of individual nodes continue to evolve separately based on the governing equations for each individual node. Consequently, the behavior of any individual node may vary from the bulk behavior of the collective system.

[0173] Other large-scale force adjustments may also be made, according to embodiment, including entirely removing one or more forces for a period of simulation time, and introducing a new force. The removal (or introduction) of a force may be at a predetermined point in simulation time, at a point in simulation time determined by computation of a test condition, any similar mechanism, and/or at the discretion of a human operator of the system, according to various embodiments. In certain embodiments the removal (or introduction) of a force is gradual, and the rate of change of the removal (or introduction) may vary over simulation time or be constant, according to implementation. In some embodiments the macroscopic force adjustments are in response to various force-control instructions and input data (such as represented conceptually by "Commands and Parameters" 130, of Fig. 1).

[0174] Large-scale goal-driven modifications to the effects of masses in the system are then made ("Macro Adjust Masses" 305). In certain embodiments the effects of masses are modified during phases where node densities are being adjusted to more evenly distribute resource consumption, or to more evenly match resources needed with resources available. For example, in usage scenarios including global placement of devices according to SAF tiles, macroscopic mass adjustments may be made to "encourage" cells in over-subscribed regions to "move" to less subscribed regions (see "Depletion Weighting", located elsewhere herein). As in the case of macroscopic force adjustments, macroscopic mass adjustments may be varied according to simulation time phase, and may be gradually introduced (or removed) over the course of system evolution throughout simulation time. In some embodiments the macroscopic mass effect adjustments are in response to various mass-control instructions and input data (such as represented conceptually by "Commands and Parameters" 130, of Fig. 1). Note that adjusting the effects of mass, in certain embodiments, is with respect to densities and forces brought about by the masses, while the momentum of each of the nodes having adjusted mass effects remains unchanged.

[0175] A dynamical simulation of the nodes (as point particles) according to the mass, location, velocity, force, and other state variables is performed ("SDI Simulation" 306) for some amount of system simulation time. The time may be a predetermined interval, dependent on specific completion criteria (as provided to the SDI simulation), and any similar interval specification scheme, according to various embodiments. At the end of the simulation time the system arrives at a new state. In certain embodiments the new state includes new locations for one or more of the nodes, and the new locations of the nodes are interpreted as corresponding to new locations for the devices being placed.
[0176] According to various embodiments, any combination of the system variables (including simulation time and node mass, location, and velocity) and corresponding interpretations of the system variables in the context of the netlist (including device location and density) are examined to determine if portions of the flow should be repeated ("Repeat?" 307) or if flow is complete ("OK Result?" 308). If repeating the flow would likely improve results, and no other end condition has been met, then flow loops back ("Yes" 307Y) to macro adjustment of selected forces and masses. In some embodiments configurable settings are adjusted prior to or in conjunction with force and mass macro adjustments (such as settings associated with "Commands and Parameters" 130, of Fig. 1). If the global placement is close enough ("No" 307N), then flow is complete ("OK" 202Y) and processing continues to legalization (see Fig. 2). If there would likely be no benefit in iterating the global placement ("No" 307N), and the results are not acceptable, then flow is also complete ("Not OK" 202N), but subsequent processing then includes one or more revisions (see Fig. 2).

[0177] Tests to determine if the flow is to be repeated may be made for a predetermined end condition, a predetermined rate of change, other similar criteria, and any combination thereof according to assorted implementations. In some embodiments the flow is not repeated even if improvement is likely possible (for example if an interval of simulation time has expired).

[0178] Determinations ("Repeat?" 307 and "OK Result?" 308) are according to any combination of automatic (software program) and manual (human user) techniques, according to various embodiments. For example, an automatic technique may include software determining if the most recent iteration is a significant improvement over a previous iteration. If so, then repeating the flow is beneficial. As another example, a manual technique may include a user observing the time-evolving locations of devices and noticing that further improvements are possible and that repeating the flow would be beneficial. Another manual technique may include a user determining that the placement as changing over time is "stuck", perhaps due to some incorrectly specified constraints, and that additional iterations of the global placement flow are not likely to be beneficial unless modifications are made to the constraints.
[0179] Any portion (or all) of global placement may be performed according to various techniques, in addition to the aforementioned SDI directed technique. The additional techniques include simulated annealing, objective minimization techniques such as conjugate-gradient, chaotic processing, and other similar mechanisms to provide approximate or "close enough" device coordinates, according to various embodiments.

Initial Placement

[0180] Fig. 3B is a flow diagram illustrating selected details of an embodiment of initial placement operations for global placement, such as selected operations performed while initializing state variables (as in "Initialize State Variables" 303 of Fig. 3A). Processing begins ("Start" 310) and then one of a plurality of starting location definition techniques is chosen ("Select Technique" 310A), based, in some embodiments, on instructions provided by a user (such as information from "Commands and Parameters" 130, of Fig. 1). A first technique determines an initial placement based on a placement performed in the past ("Prior Solution" 311). A second technique formulates an initial placement based on randomization ("Random" 312). A third technique develops an initial placement according to any of a number of other mechanisms ("Selected Algorithm" 313), chosen by any combination of software and user input. The chosen technique is then performed and processing is complete ("End" 314).

Mass Determination

[0181] In some embodiments, determination of mass (as in "Determine Nodes and Forces" 302, for example) is dependent on the design flow or implementation context (such as application specific, structured array, mask-definable gate array, mask-programmable gate array, FPGA, and full custom). For example, in a standard cell context, the mass of a node may be computed as a function (such as a linear function) of area occupied by the corresponding device in the netlist. For another example, in a structured array context, the mass of a node may be computed with respect to consumption of resources provided by the structured array, or with respect to local availability or scarcity of the resources, according to the corresponding device as implemented by the resources. For another example, in an FPGA context, the mass of a node may be computed according to consumption of Look Up Table (LUT) resources, or similar switching and/or routing resources.

[0182] In some embodiments the spatial spreading forces (see "Field-Based Force Components", located elsewhere herein) are with respect to a density field based on resource utilization (such as an area averaging or summation of nearby devices, or an equivalent-gate count cost function of spatially close devices) of corresponding nodes in a local region. In some embodiments first and second density fields are computed with respect to first and second categories of logic devices (such as combinational logic devices and sequential logic devices).

Field-Based Force Components

[0183] In some embodiments various elements of the spatial spreading forces are with respect to one or more resource usage based density fields, or other types of density fields. The density fields are managed independently, and may include any combination of all nodes, combinational nodes, and sequential nodes. Computation of density fields and resultant spreading forces conceptually includes calculating local densities according to a discrete grid, computing density fields, allocating field strengths according to the discrete grid to system nodes, and calculating resultant spatial spreading forces acting on the system nodes. In some embodiments the discrete grid is a uniform (or non-variable) grid, and in some embodiments the grid is a non-uniform (or variable) grid, the grid being implemented according to architectural considerations. Local density calculation includes summing resource usage computed in continuous spatial variables (i.e. node location and mass) according to the discrete grid and digitally filtering the resultant gridded scalar field. The local density calculation includes special accounting for edges of the grid. The digital filter result is suitable for processing by a field solver. Density field computation performed by the field solver includes determining density fields (given density values on the grid) and digitally filtering the result. Allocating field strengths includes interpolating field strengths to nodes (in continuous location space) while accounting for edges of the grid. Repulsive (or spreading) forces are then computed according to the allocated field strengths.

[0184] In some embodiments the grid is a unit grid, and the region enclosed by adjacent grid lines is termed a "cell". The grid may be two-dimensional (i.e. x and y) or the grid may be three-dimensional (i.e. x, y, and z), according to implementation technology and other design-flow related parameters. In some embodiments resource usage density is proportional to the respective mass of each node, and the mass is in turn directly proportional to a "gate rating" that is a measure of relative cost of implementing a logic function corresponding to the node. In some embodiments the gate rating of the node is measured in "gate-equivalents" commonly associated with design-flow device selection criteria.

[0185] Fig. 3C is a flow diagram illustrating selected details of an embodiment of density field based force component computation, in a specific context of resource usage densities expressed in certain embodiments as mass that is proportional to gate rating. The operations of the flow are performed for each of a possible plurality of density fields, each field having separate accounting. Flow begins ("Start" 330), and proceeds to determine local resource usage density by accumulating system node masses with respect to a scalar field organized as a regular grid (in the illustrated embodiment) according to the SDI simulation spatial field ("Accumulate Gate Densities" 331). The grid is finite in size, completely covering space in the system simulation corresponding to the area available for the devices of the netlist (either an entire die or a portion thereof). The grid is extended, via one or more guard grid locations (or grid cells) one or more units around each border of the area (the boundaries of the area), to more accurately and efficiently model edge effects. The guard grid elements are then included in the gate density calculation ("Fold Guard Cell Contributions" 332). The single-unit guard-cell buffer is used in some embodiments employing two and three-point allocation/interpolation schemes, and a multi-unit guard-cell buffer is used in some embodiments having higher order allocation schemes.

[0186] The resultant density values are then further optionally processed ("Digitally Filter Density" 333), according to embodiment, to smooth variations caused by grid element representation inaccuracies. Density values for guard grid elements are then determined ("Calculate Density Guard Cell Values" 334) to enable straightforward and efficient field solver implementations. Density field computations ("Solve Gate Fields" 335) are then performed by the field solver, determining the field value at each point as equal to minus the gradient at the point (i.e. field = -Grad(n)). Any field solution technique applicable to calculating a derivative with respect to a discrete grid may be used, such as a second order finite difference formula, or any other suitable technique, according to embodiment. In some embodiments the second order finite difference formula is given as the derivative at grid point "i", and is equal to one-half the quantity equal to the difference of the values at adjacent grid points along one of the orthogonal dimensions (i.e. field(i) = (density(i+1) - density(i-1)) / 2). Derivatives are calculated for each orthogonal dimension of the system node space (two or three dimensions, according to embodiment). The result is a gridded vector field for each gridded density (such as all, combinational, and sequential).
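The field solve described above can be sketched as follows. This is a minimal illustration only: the NumPy representation, the function name, and the assumption of a one-cell guard layer on a 2-D grid are mine, not details from the patent.

```python
import numpy as np

def solve_gate_fields(density):
    """Sketch of "Solve Gate Fields" 335: the field is minus the gradient
    of the density, with the derivative along each axis approximated by
    the second order central difference (density(i+1) - density(i-1)) / 2.
    `density` is a 2-D grid assumed to already include guard cells, so
    interior derivatives are well defined; guard rows/columns stay zero
    here (they are assigned separately per the parity rules)."""
    fx = np.zeros_like(density)
    fy = np.zeros_like(density)
    fx[1:-1, :] = -(density[2:, :] - density[:-2, :]) / 2.0  # -d(density)/dx
    fy[:, 1:-1] = -(density[:, 2:] - density[:, :-2]) / 2.0  # -d(density)/dy
    return fx, fy
```

For a linear density ramp the resulting field is constant and points away from the denser side, which is the expected spreading behavior.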

[0187] In some embodiments vector field values are stored in a data structure as a tuple. Each member of the tuple corresponds to a value associated with an orthogonal dimension of the vector field, and there is a tuple associated with each grid point. In some embodiments vector field values are stored separately as scalar fields, according to each vector field orthogonal component. Each respective scalar field represents all grid points. In some embodiments vector field values are stored according to other arrangements that are mathematically equivalent to tuples or scalar fields. In addition, vector fields may be stored in various combinations of tuple, scalar field, and other forms, according to embodiment. The representation employed for the vector fields may also change during processing to enable more efficient computations. Further, during processing, any portion of vector field representations may be stored in any combination of processor cache memory (or memories), processor main memory (or memories), and disk (or other similar non-volatile long-term) storage, according to usage scenario and implementation.

[0188] The gridded vector fields are then processed according to a digital filter ("Digitally Filter Fields" 336). In some embodiments the filtering of the gridded vector fields is according to operations identical, except for edge processing, to the smoothing performed on density values (as in "Digitally Filter Density" 333). The difference between the filter operations is that for density filtering even parity is used when processing the boundaries, while for field filtering even parity is used for field components parallel to the boundary and odd parity is used for field components perpendicular to the boundary. The difference in parity accounts for the differentiation operation performed between density and field domains, such that parity is reversed from even (for density) to odd (for field) when differentiation is directed into a boundary. For a (scalar) density, even parity means values associated with guard grid points are added to interior grid points. For a (vector) field, even parity means the guard grid points are set equal to the respective closest inner grid points, and odd parity means that the guard grid points are set equal to the negative of the respective closest inner grid points ("Calculate Field Guard Cell Values" 337). Thus the average field directed into (or out of) a boundary vanishes at the boundary. Assigning guard point field values enables subsequent efficient computation of field values in the continuous location representation of nodes from the discrete field values ("Interpolate Gate Fields to Nodes" 338). Corresponding forces may then be calculated according to node field values and node masses. Processing is then complete ("End" 339).
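The parity rules for field guard cells might be realized as in the following sketch. The single guard layer, the NumPy arrays, the function name, and the corner handling (corners take the top/bottom assignment last) are my assumptions; the patent does not spell out an implementation.

```python
import numpy as np

def set_field_guard_cells(fx, fy):
    """Assign one-layer guard values per the stated parity rules:
    components parallel to a boundary copy the nearest interior value
    (even parity); components perpendicular to it copy the negated value
    (odd parity), so the average field into the boundary vanishes."""
    # left/right boundaries (first/last row): x is perpendicular, y parallel
    fx[0, :], fx[-1, :] = -fx[1, :], -fx[-2, :]   # odd parity
    fy[0, :], fy[-1, :] = fy[1, :], fy[-2, :]     # even parity
    # top/bottom boundaries (first/last column): y perpendicular, x parallel
    fy[:, 0], fy[:, -1] = -fy[:, 1], -fy[:, -2]   # odd parity
    fx[:, 0], fx[:, -1] = fx[:, 1], fx[:, -2]     # even parity
    return fx, fy
```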

[0189] Fig. 3D is a flow diagram illustrating selected details of an embodiment of gate density accumulation, such as operations referred to elsewhere herein ("Accumulate Gate Densities" 331, of Fig. 3C, for example). Conceptually mass associated with each node (represented in continuous location space) is allocated to a local neighborhood portion of the discrete grid points. Guard grid points are added around the boundary of the grid to efficiently process edge conditions. In some embodiments a two-point linear spline, also known as a Cloud-In-Cell (CIC) or area weighting technique, is used to allocate the mass of each node to four neighboring grid points. In some embodiments a three-point spline technique is used to allocate node mass to nine neighboring grid points.

[0190] More specifically, flow begins ("Start" 340) by initializing accumulation variables (such as to zero), and then a check is made to determine if processing is complete for all nodes in the simulated system ("Iterated Over All Nodes?" 341). If so ("Yes" 341Y), then gate density accumulation processing is complete ("End" 345). If not, then a first (and subsequently a next) node is selected for processing, and flow continues ("No" 341N). Spline coefficients are then determined for the node ("Determine Spline Weights" 342), based on distances from the respective node to each field grid point (see the discussion of Fig. 3E, elsewhere herein).

[0191] After all of the spline weights for all of the grid points have been calculated, a check is made to determine if all fields the respective node contributes to have been processed ("Iterated Over all Fields" 343). If so ("Yes" 343Y), then processing loops back to check if all nodes have been processed. If not, then a first (and subsequently a next) field is selected for processing, and flow continues ("No" 343N). The effect of the node is then accumulated to the respective field array at each of the grid points currently subject to interpolation ("Apply Node Weight to Field Array" 344). Processing then loops back to determine if all fields have been processed.

[0192] Fig. 3E is a conceptual diagram illustrating an embodiment of two-point interpolation of node mass to grid points, as performed during mass accumulation (such as "Determine Spline Weights" 342, of Fig. 3D). Boundary 394 is shown to represent edges of the system simulation space (and corresponding edges of an integrated circuit region or die). Several points of the discrete grid are illustrated: interior point I1 381, boundary points B1 371, B2 372, and B3 373, and guard points G1 386, G2 388, and G3 389. Mass from node N1 375 is shown accumulating to four grid points (G1, G2, G3, and B2), according to distance along orthogonal dimensions of the system simulation location space (δx1 390 and δy1 392). Conceptually grid points B2 and G1 together receive (1-δx1) of the mass of N1, while grid points G2 and G3 together receive δx1 of the mass of N1. More specifically each dimension is processed in a geometric fashion, so the total mass contribution from N1 to B2, for example, is (1-δx1) * (1-δy1), and so forth. As illustrated in the figure, δx1 is the projected distance along the x-axis from B2 to N1, and similarly for δy1 with respect to the y-axis, B2, and N1.
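The two-point (Cloud-In-Cell) allocation just described can be sketched as follows. The function name, the NumPy grid, the unit cell size, and the assumption that the grid already contains guard cells (so the +1 indices always exist) are mine, not the patent's.

```python
import numpy as np

def accumulate_mass_cic(grid, x, y, mass):
    """Allocate one node's mass to its four neighboring grid points with
    the bilinear weights from the text: the point at fractional offset
    (dx, dy) from the node gets (1-dx)*(1-dy) of the mass, and so on.
    Total allocated mass always sums to the node mass."""
    ix, iy = int(np.floor(x)), int(np.floor(y))
    dx, dy = x - ix, y - iy
    grid[ix,     iy]     += mass * (1 - dx) * (1 - dy)
    grid[ix + 1, iy]     += mass * dx * (1 - dy)
    grid[ix,     iy + 1] += mass * (1 - dx) * dy
    grid[ix + 1, iy + 1] += mass * dx * dy
    return grid
```

Contributions from multiple nodes in the same cell simply add, matching the additive accumulation described for N1 and N2.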

[0193] The figure also illustrates mass allocation of node N2 376 to four neighboring grid points (B1, B2, B3, and I1), none of which are guard points. The mass contribution from N2 to point B2 is additive with the mass contribution from N1 to B2. Also, there may be any number of other nodes (not illustrated) within the same grid cell as either of nodes N2 and N1, and masses from the respective nodes are accumulated in the same manner as illustrated for N2 and N1.

[0194] Fig. 3F is a conceptual diagram illustrating an embodiment of three-point interpolation of node mass to grid points, as performed during mass accumulation (such as "Determine Spline Weights" 342, of Fig. 3D). The figure is representative of operations similar to Fig. 3E, except the node being processed according to mass accumulation affects masses accumulating for nine nearest-neighbor grid points (B0 370, B1 371, B2 372, B3 373, B4 374, I4 384, I3 383, I2 382, and I1 381). The formula representing accumulation to a point (such as I1) is implementation dependent.

[0195] Fig. 3G is a conceptual diagram illustrating an embodiment of applying guard grid point masses to interior grid points, such as operations referred to elsewhere herein ("Fold Guard Cell Contributions" 332 of Fig. 3C, for example). The elements and representations are similar to Fig. 3E. In a first stage of processing, contributions of "right-hand column" guard elements (G2 388, G3 389, and G4 390) are summed, or "folded", into corresponding guard and interior elements of the adjacent column (G1 386, B2 372, and B3 373, respectively), as suggested conceptually by curved arrows 396. In a second stage of processing, contributions of "top row" guard elements (G1 386 and G0 385) are summed to (or folded into) corresponding interior elements of the adjacent row (B1 371 and B2 372, respectively), as suggested conceptually by curved arrows 395. The summation processing corresponds to even parity. Similar processing is performed for the other two edges of the region.
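The two-stage folding might look like the sketch below for a single guard layer on all four edges. The one-layer assumption, the NumPy representation, and the function name are mine; the edge-by-edge order mirrors the column-then-row staging in the text.

```python
import numpy as np

def fold_guard_cells(grid):
    """Fold guard-cell contributions into the adjacent interior cells
    with even parity (addition), then zero the guard cells. Processing
    one edge at a time routes corner guard mass through the adjacent
    guard column into the interior, as in the two-stage description.
    Total mass is conserved."""
    grid[1, :]  += grid[0, :];  grid[0, :] = 0.0    # fold top guard row
    grid[-2, :] += grid[-1, :]; grid[-1, :] = 0.0   # fold bottom guard row
    grid[:, 1]  += grid[:, 0];  grid[:, 0] = 0.0    # fold left guard column
    grid[:, -2] += grid[:, -1]; grid[:, -1] = 0.0   # fold right guard column
    return grid
```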
[0196] Fig. 3H is a flow diagram illustrating selected details of an embodiment of digital density filtering, such as operations referred to elsewhere herein ("Digitally Filter Density" 333, of Fig. 3C, for example). Conceptually each density grid is filtered, alone or in combination with other density grids, according to embodiment. Filtering each density grid may include filtering all of the elements of the respective grid, although in certain embodiments filtered elements may be selected. Applying the digital density filtering process includes determining edge conditions for each grid element, "smoothing" temporary copies of elements of the grid, and replacing the original grid elements with the smoothed elements.

[0197] More specifically, flow begins ("Start" 350) and a working copy of grid elements is created. Then additional elements are added "outside" the spatial boundaries of the temporary grid ("Populate Guard Cells" 351). The added guard elements enable more useful smoothing results in some usage scenarios. Then a local averaging is performed on elements of the temporary grid, including the guard elements ("Apply Spreading Function" 352). In some implementations the spreading function reduces numerical artifacts associated with short-wavelength density fluctuations. In some usage scenarios the numerical artifacts arise due to inaccuracies in representation of a grid or grid elements.

[0198] Any combination of smoothing functions may be used, according to various embodiments, including relatively conservative and relatively more aggressive techniques. In some embodiments a binomial weighting function implementing a 1-2-1 spreading (with a subsequent division by four to preserve total mass) over spatially neighboring grid element values is used. In some embodiments the binomial weighting is performed in any number of orthogonal dimensions, up to and including the maximum number of spatial dimensions represented in the SDI simulation. After completing the spreading processing, the temporary elements are used to replace the original array elements ("Copy to Original Array" 353) and flow is complete ("End" 354).
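A separable 1-2-1 binomial smoothing over a 2-D grid could look like the following sketch. The NumPy implementation, the function name, and the edge-replication padding (standing in for the guard-cell population, which the text handles separately) are my assumptions.

```python
import numpy as np

def binomial_filter_121(grid):
    """Apply 1-2-1 binomial smoothing (with division by four to preserve
    total mass) once along each orthogonal dimension, operating on a
    temporary padded copy as the text describes, so the original grid is
    not modified until the result is copied back by the caller."""
    padded = np.pad(grid, 1, mode='edge')   # temporary copy with guard ring
    # smooth along axis 0
    sx = (padded[:-2, 1:-1] + 2.0 * padded[1:-1, 1:-1] + padded[2:, 1:-1]) / 4.0
    # smooth along axis 1
    p2 = np.pad(sx, 1, mode='edge')
    return (p2[1:-1, :-2] + 2.0 * p2[1:-1, 1:-1] + p2[1:-1, 2:]) / 4.0
```

A unit spike of mass 16 spreads into the familiar 1-2-1 outer product (1, 2, 1 in each axis), with the total mass unchanged for interior spikes.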

[0199] In some embodiments all of the filtering operations for all of the elements of all of the density grids are completed before any of the associated temporary results replace the original elements, as the original elements are required as inputs to respective filtering computations for each grid. Alternatively, temporary copies of all of the original elements may be made, and the copying may occur as filtering results are made available. Other similar arrangements of original and temporary element management with respect to filtering computations are envisioned.

[0200] As mentioned elsewhere herein, processing according to the illustrated flow is entirely optional, according to embodiment. In addition, in some embodiments multiple iterations of the flow may be performed, in some usage scenarios using varying filter functions. Consequently zero or more iterations of the illustrated flow are performed (the iterations are not explicitly shown), according to application requirements and implementation.
[0201] Fig. 3I is a flow diagram illustrating selected details of an embodiment of interpolating gate fields to nodes, such as operations referred to elsewhere herein ("Interpolate Gate Fields to Nodes" 338, of Fig. 3C, for example). Conceptually field components calculated according to the (discrete) grid are mapped onto the continuous spatial coordinates of node locations. In some embodiments the mapping is according to the node mass accumulation (such as summations performed in "Accumulate Gate Densities" 331). In other words, if an N-point spline technique is used to accumulate densities, then an N-point spline technique is also used to interpolate fields to nodes, and the value of N is the same for both techniques. Using matched spline weights during accumulation and interpolation prevents "self-forces" that would otherwise arise and spontaneously propel a node inconsistently with forces acting on the node.

[0202] More specifically, flow begins ("Start" 360) by initializing node force values (such as to zero), and then a check is made as to whether processing is complete for all nodes in the simulated system ("Iterated Over All Nodes?" 361). If so ("Yes" 361Y), then gate field interpolation processing is complete ("End" 365). If not, then a first (and subsequently a next) node is selected for processing, and flow continues ("No" 361N). Spline coefficients are then determined for the node ("Determine Spline Weights" 362), based in part on user input in some embodiments (such as those from "Commands and Parameters" 130, of Fig. 1). In some embodiments the user input is chosen to drive balancing of corresponding device distribution throughout an integrated circuit die.

[0203] After all the spline weights for the respective node have been determined, a check is made to determine if all fields affecting the respective node have been processed ("Iterated Over all Fields" 363). If so ("Yes" 363Y), then processing loops back to check if all nodes have been processed. If not, then a first (and subsequently a next) field is selected for processing, and flow continues ("No" 363N). The force contributed according to the respective field is accumulated with forces associated with other fields ("Sum Field Contributions to Force on Node" 364). The accumulation is according to each orthogonal spatial dimension associated with force modeling (i.e. x and y for two-dimensional systems and x, y, and z for three-dimensional systems). Flow then loops back to determine if all fields have been processed.
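The matched-weight interpolation of one field to one node can be sketched as below. Reusing exactly the same two-point spline weights as the accumulation step is what prevents self-forces; the function name, NumPy arrays, and force = mass * field scaling convention are my assumptions.

```python
import numpy as np

def interpolate_field_to_node(fx, fy, x, y, mass):
    """Interpolate gridded field components back to a node's continuous
    position using the SAME two-point spline weights used during mass
    accumulation (matched N-point weights avoid spurious self-forces),
    then scale by the node mass to obtain the force contribution."""
    ix, iy = int(np.floor(x)), int(np.floor(y))
    dx, dy = x - ix, y - iy
    weights = [((ix, iy),         (1 - dx) * (1 - dy)),
               ((ix + 1, iy),     dx * (1 - dy)),
               ((ix, iy + 1),     (1 - dx) * dy),
               ((ix + 1, iy + 1), dx * dy)]
    ex = sum(w * fx[i, j] for (i, j), w in weights)
    ey = sum(w * fy[i, j] for (i, j), w in weights)
    return mass * ex, mass * ey   # per-field force, summed over fields by caller
```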

Depletion Weighting

[0204] The effect a node has on local density and resultant forces may be "artificially" increased (or decreased) to cause nodes to move to more satisfactory placements more quickly. Local density modification may be considered to be a result of manipulating a weighting associated with the mass of one or more nodes, and is referred to as depletion weighting. In other words, depletion weighting is a technique that may be used to drive the system to the point of legality in an SAF flow via dynamical means. By providing a dynamical solution to the problem, a higher quality result may be obtained in some usage scenarios. In certain embodiments depletion weighting operates by attaching a modifier to the density contributed by a node and the expansion field force acting upon it.

[0205] In some embodiments an expansion field without depletion weighting is used. In some embodiments an expansion field with depletion weighting is used. In some usage scenarios the depletion weighting improves any combination of actual node resource footprint, block footprint, and block capacity. In some usage scenarios the depletion weighting results in nodes being driven apart only as far as necessary to achieve legality.

[0206] In certain embodiments the depletion weight is calculated from a weighted sum of the differences between the available resources and the node resource footprint onto a quantization block, i.e. the amount of resource depletion caused by presence of the node in its current state. The depletion weight acts as a simple linear weight modification to both the density contributed by the node (in accumulation processing phases) and the force acting on the node (in interpolation processing phases), and is computed as:

dpwt = (1 + m)^pdpwt

where pdpwt is the power-law configuration parameter (that in certain embodiments defaults to 0, i.e. no modification), and the modifier "m" is as defined below. There is in addition a linear term and configuration parameter cdpwt (that in certain embodiments defaults to 1, i.e. no modification) that in some usage scenarios enables improved results compared to the power-law form alone.

[0207] The weights are computed differently if the quantization block is depleted in any one of the resources required for the node. For example, a node may be oversubscribed in only a single resource, but undersubscribed for others, leading to no net result unless resources are considered individually. Thus, if any resource appears depleted with respect to requirements for a node, then only the depleted resources are considered. In some usage scenarios the node is thus "coerced" out of a quantization block by depletion weighting related expansion forces.

[0208] The following equations are used when there is depletion for at least one resource. Nomenclature:

f_a    node footprint for atom (a)
b_f_a  block footprint for atom (a)
b_c_a  block capacity for atom (a)

For overfull (i.e. depleted) quantization blocks, the modifier m is given by:

m = cdpwt * sum_a { f_a * (b_f_a - b_c_a) / b_c_a }

where only terms with (b_f_a - b_c_a) > 0 are considered, sum_a indicates a sum over all values of iteration variable "a", and the term atom refers to a slot in an underlying SAF. The modifier ensures that (a) resources that are more limited are given higher weight, and (b) nodes possessing multiple depleted resources have higher weight.
[0209] For the case of no depletion, the modifier m is given by:

m = sum_a { f_a * (b_f_a - b_c_a) / b_c_a / b_c_a } / sum_a { f_a / b_c_a }

where (compared to the depleted block case) additional terms serve to map the amount of depletion onto the range [-1,0] (resulting in a weight in the range [0,1]). Thus m = -1 is the minimum when the block is completely empty and m = 0 when the block is full. In some embodiments depletion zones may be treated differently from one another.
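The two cases for the modifier m can be sketched together as follows. This is a hedged illustration of the formulas in this section, not the patent's code: the dictionary representation of per-atom footprints/capacities and the function name are my assumptions.

```python
def depletion_modifier(f, b_f, b_c, cdpwt=1.0):
    """Compute the depletion-weight modifier m. f[a] is the node
    footprint, b_f[a] the block footprint, and b_c[a] the block capacity
    for atom type a. If any resource is depleted (b_f_a > b_c_a), only
    depleted resources contribute, weighted by relative scarcity;
    otherwise the normalized form maps depletion onto [-1, 0]."""
    atoms = list(f)
    depleted = [a for a in atoms if b_f[a] - b_c[a] > 0]
    if depleted:
        # overfull block: linear weighted sum over depleted resources only
        return cdpwt * sum(f[a] * (b_f[a] - b_c[a]) / b_c[a] for a in depleted)
    # no depletion: normalized so m = -1 (empty block) up to m = 0 (full)
    num = sum(f[a] * (b_f[a] - b_c[a]) / (b_c[a] * b_c[a]) for a in atoms)
    den = sum(f[a] / b_c[a] for a in atoms)
    return num / den
```

With dpwt = (1 + m)^pdpwt, an empty block (m = -1) zeroes the weight while an oversubscribed block raises it, which is the coercion behavior described.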

[0210] In some embodiments a simpler normalization multiplier is used, i.e. 1/sum_a{f_a}, having the effect of treating all depletion zones equally.
[0211] In some embodiments where depletion zones are treated differently from one another, depletion weighting tends to reduce density contributed by nodes that "fit" and to increase density for nodes that "don't fit". Also, nodes that fit tend to be affected by weaker expansion forces and nodes that don't fit tend to be affected by stronger expansion forces. Thus the net effect of the depletion weighting is that nodes that easily fit contribute a smaller density and are affected by a lesser force from the expansion fields, but nodes that don't fit contribute a larger density and are affected by a stronger force. The variation in forces tends to contribute to forward progress in several ways. The density differential between nodes that are fitting and those that are not creates a situation where the system naturally (thermodynamically) evolves to a lower energy state, where everything fits. Also, the force differential provides a direct dynamical mechanism to cause non-fitting nodes to leave an overfull block (as a result of the density surplus and the attendant local expansion field) before other nodes get a chance to leave the block.
[0212] In some embodiments a depletion weight technique calculates the node depletion weight at each of the nearest neighbor grid points used in the accumulation and interpolation, so that nodes near a block boundary are subject to forces due to the inclusion of the node in the neighboring block as well as the block the node is included in. In certain usage scenarios this prevents nodes from oscillating (or "sloshing") between blocks when there is likely no benefit to be gained from the oscillation.
[0213] The induced per-block expansion field tends to drive non-fitting nodes towards the boundary where they may tend to cluster temporarily if the neighboring block does not have the capacity to accept them. The cluster may be, however, a transient effect. Nodes that are bunched near the edge of a block either slide along the edge until reaching an accepting block on either side, or hover at the edge until conditions in the nearest neighboring block become favorable for transit.

Exclusion Zones

[0214] In some embodiments various regions, or exclusion zones, may be defined that are not allowed to include any morphable-devices, any placed elements, or any elements of certain types, according to various usage scenarios. During later stages of global placement iterations, exclusion zones may be processed to provide gradually growing regions of higher density fields that result in repulsing forces that tend to expel nodes from the exclusion zones. In certain embodiments the exclusion zones "grow" as simulation time moves forward, starting out as point particles (like nodes), as miniature representations of the desired exclusion zone (the miniature having an overall shape and aspect ratio equal or nearly equal to the desired exclusion zone), or as two-dimensional lines, according to various usage scenarios. Subsequently the starting representation evolves into an ever-growing object until the object matches the desired exclusion zone in size and location. Similarly exclusion zones specified as strips across the entire area being placed and routed begin as an exclusion line and grow over simulation time into an exclusion area equal in dimension and location to the required exclusion zone.

[0215] Exclusion zones (also referred to as "xzones") are a way to model architectural entities that explicitly prohibit inclusion of all non-qualifying node (or corresponding device) types, while preserving the SDI-based numerical model. In certain embodiments all adjacent xzones are collapsed into a single xzone, to simplify treatment.

[0216] In some embodiments simulation proceeds according to the laws of motion defined elsewhere herein, ignoring xzones, allowing the netlist a relatively large amount of time for detangling. Once the nodes are suitably spread, a transition is made to "exclusion mode" where the xzone constraints are obeyed.

[0217] A first technique to manage the transition is to explicitly move nodes out of the way, starting from the center of the exclusion zone and continuing outward. In some embodiments the outward processing is gradual to reduce disruption caused by spatial shifting of the nodes. The center of the xzone and moving xzone boundaries are defined to push nodes in a desired direction, i.e. in the direction of accessible final placement states. For exclusion zones that are in the form of a stripe along the entire chip area, nodes are moved to one or both sides as appropriate. For exclusion zones that are in the form of isolated rectangles, the nodes are moved in a ray from the center point to the affected node, to spread out the distribution in an isotropic manner.

[0218] A second technique is to apply an artificial density enhancement to the area inside the exclusion zone as it slowly expands. In this technique, twice the average density on the xzone boundary is imposed in the interior of the xzone during transition. This provides a dynamical shove against the nodes in advance of the approaching barrier.

[0219] After the xzone transition is complete, simulation continues as during the xzone transition, but with added constraints including:

- Nodes are snapped to xzone boundaries at the end of each timestep. A node may "tunnel" to the other side of an xzone if energetically favorable (see "Tunneling Congestion Relief" located elsewhere herein for additional information); and
- The density fields obey specified parity boundary conditions at the edge of each xzone, to ensure physically relevant behavior at the boundary. In some implementations even parity is used, and in some implementations periodic parity is used.

SIMULTANEOUS DYNAMICAL INTEGRATION (SDI) SIMULATION

[0220] SDI simulation (also known as Particle In Cell (PIC) simulation) provides approximations to solutions of Newton's second law (i.e. force equals mass multiplied by acceleration, or F=ma), as expressed by a system of coupled ordinary differential equations. For each node, the sum of the forces (also known as forcing terms) acting on the respective node is equal to the mass of the respective node multiplied by the second derivative with respect to time of the state-space representation of the node. In some embodiments nodes are restricted to planar (i.e. two-dimensional) movements, and there are four equations per node (x-position, y-position, x-velocity component, and y-velocity component). In some embodiments nodes are not so restricted (i.e. allowed three-dimensional movements), and there are six equations per node (x, y, and z-positions, and corresponding velocity components).

[0221] Fig. 4 is a flow diagram illustrating selected details of an embodiment of SDI modeling and simulation, such as operations referred to elsewhere herein ("SDI Simulation" 306, of Fig. 3A, for example). Overall the illustrated processing serves to advance a dynamical system simulation forward in time, updating state-space variables according to Newtonian mechanics. Processing begins ("Start" 401) and the system of coupled ordinary differential equations is approximately solved by numerical integration for a short delta simulation time interval ("Integrate Time Forward" 402).
[0222] Changes to all of the state variables for all of the nodes are then simultaneously processed ("Update State Variables" 403), based on the numerical integration. In some embodiments relatively small-scale changes are then made to one or more of the forces and masses of the system ("Micro Adjust Forces" 404 and "Micro Adjust Masses" 405), according to a specified or a computed rate of change, in certain usage scenarios to provide more nearly continuous changes to state-space variables than would otherwise be possible. The changes to the force(s) are in addition to changes naturally arising due to the advancement of simulation time. For example, in some embodiments large-scale force (and mass) changes (such as "Macro Adjust Forces" 304 and "Macro Adjust Masses" 305, of Fig. 3A) are partially effected by incremental changes.

[0223] The new system state is examined ("Finished" 406) to determine if the SDI simulation is complete via a test of an end condition. An example termination condition is completion of simulation of a specified time interval. If the SDI simulation is finished ("Yes" 406Y), then processing is complete ("End" 499). If the end condition is not satisfied, then flow loops back for further simulation forward in time ("No" 406N). In some embodiments configurable settings are adjusted prior to or in conjunction with continuing SDI simulation (such as settings associated with "Commands and Parameters" 130, of Fig. 1).
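The Fig. 4 loop (integrate forward, update state, micro-adjust, test an end condition) can be sketched as follows. The simple forward-Euler step and all names here are illustrative assumptions; the specification leaves the integrator choice open.

```python
# Sketch of the Fig. 4 SDI loop: integrate forward in time, update all state
# variables simultaneously, apply small "micro" force/mass adjustments, and
# test an end condition. The Euler step and names are illustrative only.

def sdi_simulate(state, deriv, t_end, dt, micro_adjust=lambda t: None):
    t = 0.0
    while t < t_end:                                      # "Finished" (406) test
        d = deriv(state, t)                               # "Integrate Time Forward" (402)
        state = [s + dt * ds for s, ds in zip(state, d)]  # "Update State Variables" (403)
        micro_adjust(t)                                   # "Micro Adjust Forces/Masses" (404, 405)
        t += dt
    return state
```

For example, a single state variable with unit rate advances to 1.0 after simulating a unit time interval.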

[0224] Numerical integration techniques compatible with the time-integration include Runge-Kutta, predictor-corrector, leap-frog, and other similar integration techniques. Various embodiments use any combination of integration techniques.
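As one concrete instance of the named techniques, a leap-frog (kick-drift-kick) step for a single coordinate can be written as below; this is a generic textbook form, not code from the specification, and `acc` is a hypothetical acceleration callback.

```python
# Illustrative leap-frog integration step for one coordinate: half-step
# velocity kick, full-step position drift, second half-step kick.
# acc(x) returns the acceleration at position x (hypothetical callback).

def leapfrog_step(x, v, acc, dt):
    v_half = v + 0.5 * dt * acc(x)           # half-step velocity kick
    x_new = x + dt * v_half                  # full-step position drift
    v_new = v_half + 0.5 * dt * acc(x_new)   # second half-step kick
    return x_new, v_new
```

Under constant acceleration the step reproduces the exact kinematics (x = a*t^2/2, v = a*t), which is one reason leap-frog is attractive for force-driven node dynamics.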

[0225] In some embodiments the time-integration is according to a fixed timestep, while in other embodiments the integration is according to an adaptive timestep. The adaptive timestep results in reduced integration costs during system simulation time periods of slowly changing state variables and improved numerical accuracy during system simulation time periods of rapidly changing state variables, or otherwise "stiff" governing equations. In some embodiments the integrator (such as used in "Integrate Time Forward" 402) receives an input Delta-t (an amount to advance system simulation time). In some embodiments the integrator provides an actual Delta-t (an amount system simulation time actually advanced during the integration) and a suggested Delta-t for use in succeeding integration timesteps. In some of the adaptive timestep embodiments one or more of the actual and suggested Delta-t values are used to control the adaptive timestep.

[0226] While the discussion of SDI is specific to global placement, the technique is applicable to other functions of the aforementioned place and route flow, including any combination of global placement, legalization, detailed placement, and routing.

LEGALIZATION

[0227] Conceptually legalization determines if the global placement is likely to be usable for a successful detailed place and route, and if not, legalization attempts to improve placement before proceeding to detailed placement. The determination of suitability for detailed placement includes assessing one or more metrics correlated with local solvability of placement (and routing) problems not addressed by global placement. In some embodiments one of the metrics includes sectioning all of the devices according to a grid (such as a regular grid) of analysis windows, and determining locally if within each analysis window resources exceed (or fall below) requirements. If all of the analysis windows are simultaneously solvable (i.e. available resources meet or exceed requirements), then detailed placement and routing is likely to succeed without additional refinements to the global placement. Improvements, or corrective actions, may take various forms including any combination of "moving" devices from one region to another, transforming devices from one implementation form to another, and partitioning-related strategies.
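The window-based metric described above reduces to binning device demand into grid cells and flagging any cell whose requirements exceed supply. A minimal sketch, with data shapes and a uniform per-window capacity assumed for illustration:

```python
from collections import defaultdict

# Sketch of the legality metric: allocate each device to its containing
# analysis window on a regular grid and report windows whose total demand
# exceeds the available resources. Data shapes are illustrative assumptions.

def oversubscribed_windows(devices, capacity, win):
    """devices: list of (x, y, demand); capacity: resources per window."""
    demand = defaultdict(int)
    for x, y, d in devices:
        demand[(x // win, y // win)] += d     # each device lands in one window
    # Detailed placement is likely to succeed only if this list is empty,
    # i.e. all windows are simultaneously solvable.
    return [w for w, d in demand.items() if d > capacity]
```

For example, two devices totaling demand 6 in a window of capacity 5 flags that window while a lightly loaded neighbor passes.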

[0228] Fig. 5A is a flow diagram illustrating selected details of a first embodiment of resource reconciliation, as a first example of legalization (such as "Legalization" 203, of Fig. 2). Overall the flow includes determining a size of an analysis window and allocating all devices in groups to their respective containing windows, and sub-dividing and transforming logic functions to reduce resource over-subscription. The flow also includes checks to determine if the devices allocated to each window may be implemented with the resources available in the window (i.e. no analysis window is over-subscribed), and if continued iterations are likely to provide improved results.

[0229] Processing begins ("Start" 501) with global placement information (such as produced by "SDI Global Placement" 202, of Fig. 2, for example). The global placement result may not be legal (i.e. in a standard cell flow devices may be overlapping, or in a structured array flow more resources may be used than are locally available), but is good enough to continue processing via refinement techniques implemented in legalization. An analysis window is determined ("Quantize" 502), corresponding to a quantization block size, and conceptually replicated in a regular contiguous (but not overlapping) fashion such that all of the devices in the netlist are allocated to one (and only one) window (some windows may be devoid of devices). In some embodiments relating to a structured array design flow, the analysis window is a rectangular shape having a size that is an integer multiple of a corresponding SAF tile. In some embodiments the analysis window is aligned with respect to SAF tiles.
[0230] A first determination as to whether all of the analysis windows (also referred to as quantization blocks or simply "Q-Blocks") are simultaneously legal, i.e. none are over-subscribed, is made ("All Q-Blocks OK?" 503). If all of the Q-Blocks are legal, then legalization processing is complete ("OK" 203Y) and processing continues to detailed placement (see Fig. 2). Otherwise ("No" 503N) the devices are sub-divided ("Partition" 504) via partitioning strategies including any combination of fixed blocks, recursive bisection, and other similar techniques, according to embodiment.

[0231] A second legalization check is performed ("All Q-Blocks OK?" 505) that is substantially similar to the first check. As in the first checking case, if all of the Q-Blocks are legal, then processing is complete ("OK" 203Y) and the legalized result is ready for detailed placement. Otherwise ("No" 505N) the devices are transformed (individually or in groups) to logically equivalent formulations having reduced resource over-subscription ("Morph" 506). The transformation, or morphing, operations are directed to manipulate the netlist such that logic functions requiring resources not available in a Q-Block are implemented as logic functions using resources that are available. As an example, an OR function required in a Q-Block exhausted of OR gates may instead be implemented as a NOR gate followed by an inverting gate, if a NOR gate and an inverting gate are available in the Q-Block. Morphing may be used in usage scenarios including structured array regions.
[0232] A third legalization check is performed ("All Q-Blocks OK?" 507) that is also substantially similar to the first check. As in the first checking case, if all of the Q-Blocks are legal, then processing is complete ("OK" 203Y) and the legalized result is ready for detailed placement. Otherwise ("No" 507N) a determination is made as to whether further legalization iterations are likely to result in improvement ("Continue?" 508). If continuing is potentially beneficial ("Yes" 508Y), then one or more adjustments are made to the analysis windows ("Adjust Q-Blocks" 509), and flow loops back to repeat processing starting with quantization. In some embodiments the adjustments include increasing the Q-Block size in one or more dimensions according to a granularity that is an integer multiple of a corresponding dimension of an underlying SAF tile. For example, the Q-Block size may start out as "1 by 1" (i.e. equal in size to the SAF tile), then be increased by one in the first dimension to "2 by 1" (i.e. twice the SAF tile size in the first dimension), and then be increased by one in the second dimension to "2 by 2" (i.e. twice the SAF tile size in the second dimension). Alternatively, the Q-Block size may be successively lowered, or may be increased in one dimension while being decreased in another, according to various embodiments. More than one Q-Block size choice may result in legal or otherwise useful results, according to various characteristics of the results (such as minimum and maximum local resource utilization, and other similar metrics).

[0233] If it is determined that continuing legalization processing is not useful (i.e. not likely to further a solution), then processing is also complete ("Not OK" 203N) and subsequent processing includes one or more revisions (see Fig. 2). In some embodiments checking if a Q-Block size equals or exceeds a predetermined value (either before or after one or more adjustments) is part of the continuation determination, as legalization achieved with relatively smaller Q-Block sizes, in some usage scenarios, is more likely to result in successful detailed placement.
[0234] Fig. 5B is a flow diagram illustrating selected details of a second embodiment of resource reconciliation, as a second example of legalization (such as "Legalization" 203, of Fig. 2). Flow begins ("Start" 520) and proceeds to determine a window for quantizing ("Quantize at Specified Window Size" 521), binning elements into Q-Blocks and optionally morphing selected elements to find a legal result. All Q-Blocks are then tested to determine if or to what extent resource conflicts exist ("All Q-Blocks Legal?" 522). If all Q-Blocks are simultaneously free of resource conflicts ("Yes" 522Y), then processing proceeds to mark the current state as a possible solution ("Nominate Current System State as Candidate Solution" 531). A test is then made to determine if the current Q-Block is a minimum size Q-Block ("Q-Block Window Size at Smallest Possible Dimensions?" 532). If so ("OK" 203Y), then processing is complete and the result is ready for detailed placement. If the current Q-Block is not the minimum size ("No" 532N), then processing proceeds with a smaller window ("Reduce Target Q-Block Window Size" 533). Flow then loops back ("Go to Start" 535) to attempt processing with the reduced window size.
[0235] If at least one Q-Block has a resource conflict ("No" 522N), then a determination is made as to the severity of the remaining conflicts ("Characterize Extent of Quantization Failure" 523). In some embodiments the determinations include "Easy", "Hard", and "Extreme" cases. Relatively simple conflicts ("Easy" 528) are processed by depletion weighting ("Activate / Tune Depletion Weighting" 524), and relatively more difficult cases ("Hard" 529) are processed by modifications to repulsive (or spreading) force sources ("Adjust Spreading Field Strengths" 525). Processing for the Easy and Hard cases then flows back to repeat all or portions of global placement (as revisions in the context of Fig. 2) according to depletion weighting activation/tuning or adjusted spreading strengths ("Back to Global Placement" 527 and then "Not OK" 203N). Substantially more difficult cases ("Extreme" 530) are processed by partitioning ("Go to Partitioning" 526).

[0236] The determination of conflict severity or difficulty may include examination of objective factors (such as a ratio of resources demanded compared to supplied in the Q-Blocks or other computable figures of merit), and may also include examination of subjective factors (such as how much processing time has already been expended during legalization, and other similar progress indicators), according to various embodiments. In certain usage scenarios, upon entry to legalization, there may be a subjective perception that the system is far from legal due, for example, to over-concentration of nodes of one or more resource classes (such as Nand2, Nor2, Mux2, Inverter, and so forth) in certain regions. In some usage scenarios the strength of the spreading forces acting on the over-concentrated resource class is increased, and earlier processing (such as global placement processing with revisions via "Not OK" 203N of Fig. 2) is repeated. In other usage scenarios, if the resource imbalance is mild, then an attempt may be made to gently nudge the system with depletion weighting activated as revised global placement processing (such as via "Not OK" 203N of Fig. 2).
[0237] However, if extended time-evolution with increasingly powerful depletion weighting does not resolve the conflicts, then in certain embodiments the quantization failure may ultimately be deemed "Extreme" even though only a comparatively small number of Q-Blocks show slightly over-subscribed resources. As the depletion weighting influencing factors become increasingly strong, the governing dynamical equations become stiff, and the overall assessment of legalization difficulty may be escalated to extreme, even though over-subscription is small. According to various embodiments assessment of legalization difficulty includes any combination of examining the system state, the netlist topology, the timing constraints and the architecture definition.
[0238] In some embodiments of the flow for standard cell implementation technologies, legalization may be pursued via modifications or adjustments to the spreading force strength. For example, the masses of nodes may be directly correlated to the areas of the standard cells, and the capacity of each Q-Block directly correlated to the respective Q-Block area. Thus spreading forces may be used to drive density so that area consumed by nodes within a Q-Block is no greater than the area of the Q-Block. When achieved, legalization is complete and flow proceeds to detail placement. In some embodiments legalization may be pursued via partitioning, optionally in combination with spreading force strength adjustments.

Partitioning

[0239] Fig. 5C is a flow diagram illustrating selected details of an embodiment of partitioning (such as processing performed as a result of "Go to Partitioning" 526, of Fig. 5B). Flow begins ("Start" 540) and then a technique for partitioning is chosen ("Select Partitioning Algorithm" 541) via any combination of manual (user directed) or automatic (software determined) mechanisms, according to various embodiments. If a Q-Block technique is chosen ("Q-Block Edge Flow" 542), then processing is performed for each Q-Block ("For Each Q-Block" 543). If a Bi-Section technique is chosen ("Recursive Bi-Section" 548), then processing is performed for each of a set of progressively smaller windows ("For Each Window" 549), starting, in some embodiments, with a window size equal to the entire place and route region, and proceeding to progressively smaller and smaller windows.

[0240] Processing for each Q-Block according to the Q-Block edge flow technique includes determining nodes causing resource conflicts ("Identify Nodes Impinging on Over-Subscribed Resources" 544), followed by choosing an exit edge ("Pick Edge to Flow Through" 545) for the nodes that are impinging. Then the nodes are ranked, for example, by separation from the chosen exit ("Prioritize by Distance to Edge" 546) and then moved across the exit edge ("Push Nodes Across Edge Until Legal or Balanced With Respect to Resource Class" 547), thus entering a different Q-Block. After all Q-Blocks have been processed, a determination is made as to whether a legal result has been obtained ("Legal Result?" 559). If a legal result has not been obtained, then one or more revisions are indicated and earlier processing is repeated ((No) "Not OK" 203N). If a legal result has been obtained ("Yes" 559Y), then the current configuration is nominated as a candidate solution, as in other legalization techniques ("Nominate Current State as Candidate Solution" 560). Processing may then proceed to detailed placement ("OK" 203Y), or may return for further legalization processing with a goal of achieving a legal result at a smaller Q-Block size ("Not OK" 203N), conceptually as a revision to legalization processing as described with respect to Fig. 2.
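The rank-and-push steps of the edge-flow technique can be sketched in one dimension as below. The choice of a right-hand exit edge, the capacity test, and all names are illustrative assumptions.

```python
# Sketch of Q-Block edge flow: rank the offending nodes by distance to the
# chosen exit edge ("Prioritize by Distance to Edge") and push the nearest
# ones across until the block is legal with respect to its capacity
# ("Push Nodes Across Edge Until Legal..."). One-dimensional, illustrative.

def push_across_right_edge(nodes, edge_x, capacity):
    """nodes: x-positions in the over-subscribed Q-Block; returns (kept, moved)."""
    ranked = sorted(nodes, key=lambda x: edge_x - x)  # closest to edge first
    moved = []
    while len(ranked) > capacity:       # push until legal w.r.t. capacity
        moved.append(ranked.pop(0))     # node crosses into the adjacent Q-Block
    return ranked, moved
```

For a block of capacity 2 holding nodes at x = 1, 5, 9 with the exit edge at x = 10, the node at 9 is pushed across and the other two remain.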

[0241] Processing for each window according to the recursive Bi-Section technique includes formulating two sections to break the window into ("Introduce Cut Line Across" 550) and then determining resource requirements and availability in each of the sections ("Count Resource Supply / Demand in Each Region" 551). Nodes are then moved between the sections ("Exchange Circuit Nodes Across Cut Lines Until Legal or Fail" 552) until successful ("Legal" 557) or no further improvements are possible ("Fail" 556). If the result is legal, then the current state is marked as a possible result ("Nominate Current State as Candidate Solution" 553) and then a determination is made as to whether a smaller Q-Block should be attempted ("Desired Q-Block Configuration?" 554). If a target Q-Block size has not been reached, then flow returns back ("No" 558) to continue bisecting windows. If the target Q-Block size has been reached, then processing is complete and flow may proceed to detailed placement ("OK" 203Y).

[0242] In some embodiments the recursion operations are according to a tail recursion formulation, and testing for the desired Q-Block configuration may include a tail recursion end check (for example, if the next region is smaller than a predetermined end condition size) as an iteration termination condition for recursive window processing. In some embodiments for use in an SAF flow context the predetermined end size is equal to an SAF tile size.
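The recursion skeleton with its end check can be sketched as follows; here balancing is trivialized to splitting one-dimensional node positions at the cut line, purely for illustration of the control structure.

```python
# Sketch of recursive bi-section with the tail-recursion end check described
# above: recurse on progressively smaller windows and stop when a window
# reaches a predetermined minimum width (e.g. one SAF tile). The 1-D node
# model and midpoint cut line are illustrative assumptions.

def bisect(nodes, width, min_width):
    if width <= min_width:                 # tail-recursion end condition
        return [nodes]
    cut = width / 2                        # "Introduce Cut Line Across"
    left = [n for n in nodes if n < cut]
    right = [n - cut for n in nodes if n >= cut]  # re-origin the right window
    return bisect(left, cut, min_width) + bisect(right, cut, min_width)
```

Four evenly spread nodes in a width-8 window with a width-2 end size land one per leaf window.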

[0243] If no further improvements are possible (via "Fail" 556), then flow continues ("Done" 555) where a determination is made as to whether an acceptable candidate solution has been found ("OK" 203Y) and detailed placement may follow, or whether revisions and repetition of earlier processing are indicated ("Not OK" 203N).

[0244] Nodes may be selected for speculative migration across the cut line according to any combination of various criteria, including proximity to an edge, a footprint onto over-subscribed resources, and any other related reason, according to embodiment. In some embodiments speculative expulsion of a node from one side of the cut line to the other side may include morphing operations on any combination of nodes on the one side, the other side, and both sides. The morphing operations are directed to discover suitable implementation forms for all nodes such that nodes in each containing region may be fully implemented using only resources in the respective containing region.

DETAILED PLACEMENT

[0245] Conceptually detailed placement serves to fine-tune placement as produced by legalization, determining final placement of all the devices of the netlist. In certain embodiments operations are relatively limited in scope, focusing on optimizations and refinements generally limited to a region corresponding to a Q-Block.

[0246] Particular detail placement techniques are described in detail in the SAF embodiments illustrated herein. Nevertheless, any of a variety of detail placement procedures and techniques may instead be employed, as the specific mechanism for performing detail placement (assignment of devices to specific, non-conflicting locations) is not a limiting aspect of the SAF techniques described herein.

[0247] In some SAF embodiments illustrated herein legalization produces Q-Blocks where supply is known to meet demand. Since the SAF already has the resources laid out in some structured manner, there is thus certainty of the existence of a fitting assignment of resource instances in the netlist to resource slots in the SAF. Consequently, there is no risk of failure to find a detailed placement solution, and moreover the Q-Blocks can be detail placed independently, including in certain embodiments, in parallel, concurrent operation.
[0248] Some embodiments use continuous variables during global placement to specify placement position. Conceptually, the position coordinates determined by global placement in these embodiments may be considered as "optimal" locations for each node, when interpreted as being representative of the collective configuration of all circuit elements. Detail placement attempts to find actual resource slots in the SAF for each resource instance in the netlist such that all resource instances are simultaneously slotted as close as possible to the coordinates calculated during SDI-directed global placement. Stated differently, a collective assignment of all resource instances to resource slots is sought for each resource class in the SAF, such that the overall variance from the coordinates assigned by global placement (and possibly modified during legalization) is minimized or reduced. Some embodiments slot each node independently in the closest available unoccupied slot (instead of prioritizing individual nodes).

[0249] Fig. 6 is a flow diagram illustrating selected details of an embodiment of detailed placement useful in a variety of applications (such as processing performed in relation to "Detailed Placement" 204 of Fig. 2). The illustrated flow may be used in design techniques relating to SAFs. Overall the flow includes determining a prioritized order to satisfy resource requirements and performing limited-scope optimizations, according to various embodiments. The flow may iterate internally to provide successively more refined solutions, and terminates when an acceptable result is found, or when it is determined that further iterations are not likely to produce improved results.

[0250] Flow begins ("Start" 601) upon receipt of placement information as produced by legalization (such as "Legalization" 203 of Fig. 2, for example). As represented by "Assign Resources" 602, resources are prioritized by class. In an illustrative embodiment the prioritization is in accordance with a function of demand for resources of a respective class and supply of SAF resource slots, the slots being consumed by the resource instances of the respective resource class. The prioritization is carried out such that as the percentage of consumed slot supply increases, the priority of the respective resource class is increased, and as the supply of resource slots increases (irrespective of demand), the priority of the respective resource class is decreased. The function is used to evaluate the priority of each resource class, and assignment of resource instances to resource slots is performed one resource class at a time, in the determined priority order of resource classes. In some embodiments the prioritization is done on a Q-Block basis. That is, the function is evaluated with respect to the demand, supply, and consumption local to each Q-Block.

[0251] Iterating through resource classes in priority order, within each resource class the resource instances impinging upon the respective resource class are identified, and an initial assignment of resource instances to resource slots is generated, with each resource instance drawing the closest still-unoccupied resource slot currently available. Closeness is measured in terms of distance from a slot center to the coordinate assigned by global placement (and possibly modified by legalization), for the node containing the resource instance.

[0252] Processing continues with a first form of limited-scope refinement ("Pairwise Interchange" 603), where selected pairs of allocated resources are interchanged in an attempt to discover an improved solution. In certain embodiments, within the set of resource instances previously assigned slots, speculative interchanges are considered between every instance and every other slot (whether occupied or not). In other words, a resource instance may be swapped with the instance occupying another slot, or may simply be moved to an empty slot. Each speculative interchange is scored according to a function of the slot position and the preferred position of the occupying resource (as assigned by global placement and possibly modified by legalization). An example function is the sum of the squares of the distances between the slot centers and the preferred positions. Speculative interchanges are accepted with strictly greedy semantics, on the demonstration of a reduced sum of squared distances from instance to slot. The interchange process will eventually stall when the collective variance of resource instances from desired positions can no longer be strictly reduced.
[0253] In some embodiments pairwise interchanges may be evaluated according to a predicate:

D(p_i, s_j')^2 + D(p_i', s_j)^2 < D(p_i, s_j)^2 + D(p_i', s_j')^2

where p_i is the ideal position of node i;
s_j is the actual location of slot j; and
D(p_i, s_j) is the distance between p_i and s_j.

The sum of D(p_i, s_j)^2 over all assignments (i->j) is minimized, according to the predicate.
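The acceptance predicate above translates directly into code; squared distances can be compared as-is, so no square root is needed. Function names are illustrative.

```python
# Sketch of the pairwise-interchange predicate: swap the slots of nodes i and
# i' only when doing so strictly reduces the sum of squared distances from
# ideal positions to assigned slot centers (strictly greedy acceptance).

def dist2(p, s):
    """Squared Euclidean distance between ideal position p and slot center s."""
    return (p[0] - s[0]) ** 2 + (p[1] - s[1]) ** 2

def accept_swap(p_i, p_ip, s_j, s_jp):
    """True if giving node i slot j' and node i' slot j lowers the cost."""
    return dist2(p_i, s_jp) + dist2(p_ip, s_j) < dist2(p_i, s_j) + dist2(p_ip, s_jp)
```

For example, two nodes assigned to each other's nearest slots (cost 200) are swapped to a zero-cost assignment, while an already-optimal pair is left alone; iterating accepted swaps stalls exactly when the collective variance can no longer be strictly reduced, as stated above.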

[0254] When the collective variance may no longer be reduced, any resource instances of other resource classes that are associated with composite forms (i.e. forms realizable from resources of more than one slot, such as an And2 realized from a Nand2 slot and an Inverter slot) participating in the pairwise interchange are placed in an available slot (corresponding to an ancillary resource) that is closest to the resource instance of the respective composite form. The (ancillary) resource instance slot assignments are then marked as locked, and the ancillary instances are thereafter excluded from the set of assignable and revisable resource instances to be placed when a corresponding resource class is subsequently processed. When all resource classes in the SAF have been processed as described above, a complete and valid initial detail placement for one Q-Block has been rendered, and subsequent optimization processes are enabled.
[0255] In certain embodiments, the above processes ("Assign Resources" 602 and "Pairwise Interchange" 603) are used in combination with "Dynamic Morphing" 604. In some dynamic morphing embodiments note is made of resource instances that are placed farthest from a respective desired location and improved placement of the forms is attempted by morphing to a functionally equivalent available destination form having a more suitable placement configuration of resource instances. In certain dynamic morphing embodiments, such speculation over implementation form for netlist nodes is combined with iteration over slot assignment and pairwise interchange. In the latter dynamic morphing embodiments various visited states are scored according to collective variance from preferred locations (as described above) and the best state that can be found is taken as a result. In certain embodiments states visited are limited by computational cost criteria.

[0256] Flow then continues to a third form of limited-scope refinement ("Pin Swap" 605), where pin swapping directed to improve routability is performed. Here, speculation is performed over various functionally equivalent mappings of netlist nets to instance pins. As an example, the inputs of a NAND gate may be interchanged without changing the function implemented in the gate. This and other similar equivalent mappings for other gates and circuitry are selectively evaluated. By considering such netlist transformations, attempts are made to reduce the difficulty of achieving a fully routed circuit layout.
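The functional equivalence that makes the NAND example a legal pin swap can be checked exhaustively over the truth table, as a tiny illustration:

```python
# Tiny illustration of the pin-swap equivalence noted above: a NAND gate's
# inputs may be interchanged without changing the implemented function,
# which is why the swap is a legal routability optimization.

def nand(a, b):
    return not (a and b)

# Verify the swapped pin mapping over every input combination.
all_equal = all(nand(a, b) == nand(b, a) for a in (False, True) for b in (False, True))
```

Pins of a non-commutative cell (e.g. a multiplexer's select versus data inputs) would fail such a check and therefore would not be swap candidates.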
[0257] In some embodiments an optional first-cut attempt at improving timing paths is then performed ("Size Devices" 606). As an example, driver sizing is selectively performed by revising the netlist to employ forms composed of resources with higher drive strengths. Optimization is not limited to such up-sizing. Selective down-sizing of drivers on non-critical paths is also performed, to free up high drive strength resources (such as in an SAF) for use by paths that are more critical.

[0258] A determination is then made ("Repeat?" 607) as to whether additional iterations of all or part of the detailed placement flow are likely to improve results. If so ("Yes" 607Y), then processing loops back to resource assignment and continues forward again from there. If further iterations are found to be unlikely to offer improvement ("No" 607N), then a determination is made as to whether the results are acceptable ("OK Result?" 608). If so ("OK" 204Y), then processing is complete and ready for routing. If the results are not acceptable ("Not OK" 204N), then processing is also complete and subsequent processing includes one or more revisions (see Fig. 2). The repeat and acceptable determinations are made by any combination of automatic (such as software) and manual (such as human inspection) techniques, according to various embodiments.

[0259] Fig. 6 is an illustrative example of detailed placement, as the order and/or presence of operations 602 through 606 will vary according to embodiment. That is, many combinations of "Assign Resources" 602, "Pairwise Interchange" 603, "Dynamic Morphing" 604, "Pin Swap" 605, and "Size Devices" 606, will have utility as embodiments of detailed placement, including combinations reordering and/or omitting one or more of these operations. As specific examples, some embodiments perform "Assign Resources" 602 and "Pairwise Interchange" 603 but omit "Dynamic Morphing" 604 and "Pin Swap" 605, while other embodiments selectively perform "Dynamic Morphing" 604 and then subsequently perform "Assign Resources" 602 and "Pairwise Interchange" 603.

[0260] Another embodiment of detail placement re-employs SDI-directed placement methodology (targeted at a resource-level netlist), optionally constrained to a reduced sub-circuit present in a specific Q-Block. In the SDI-directed detail placement embodiment, the specific forcing terms in the system of simultaneous governing equations are modified from those described in global placement, and force models more appropriate to detail placement are substituted. For example, in detail placement, once the Q-Blocks are formed and legalized, there is no further need to perform inter-Q-Block interchange of nodes. Consequently, the bulk density fields that were used in global placement to control unsustainable over-concentrations of specific resource types are unnecessary by construction in the detail placement context. Thus the bulk density fields are replaced by forcing terms that represent a spring drawing the resource-level instances of each form toward the position assigned by global placement. Simultaneously, overlap repulsions arising from pair-wise occupancy exclusions between resource instances of each resource class act to drive the resource instances toward feasible slots while preserving the topological disentanglement that was a key result of the global placement previously obtained by SDI-directed techniques.
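The force model just described, a spring drawing each resource-level instance toward its global-placement position plus pairwise occupancy-exclusion repulsions, can be sketched in one dimension as follows. The spring and repulsion constants, the linear repulsion profile, and the minimum separation are illustrative assumptions, not values from this specification.

```python
def detail_placement_forces(positions, anchors, k_spring=1.0,
                            k_repel=0.5, min_sep=1.0):
    """1-D force model sketch: a spring pulls each resource-level instance
    toward its global-placement anchor; instances closer than min_sep repel
    each other pairwise (standing in for occupancy exclusion)."""
    # Spring term: -k * (x - anchor), drawing each instance to its anchor.
    forces = [-k_spring * (x - a) for x, a in zip(positions, anchors)]
    # Pairwise overlap repulsion between instances of the same resource class.
    for i in range(len(positions)):
        for j in range(i + 1, len(positions)):
            d = positions[i] - positions[j]
            if abs(d) < min_sep:
                # Linear repulsion profile, growing as the overlap deepens.
                push = k_repel * (min_sep - abs(d)) * (1 if d >= 0 else -1)
                forces[i] += push
                forces[j] -= push
    return forces
```

An instance displaced from its anchor feels a restoring force, while two coincident instances are pushed apart toward distinct feasible slots, mirroring the two terms of the governing equations.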

[0261] The illustrated SAF embodiments emphasize a conceptual separation between global placement, legalization and detail placement, as facilitated by the described form-level netlist abstraction and the technique of morphing and facilitating data structures and SAF enabling properties. The approaches to detail placement used in the illustrative SAF embodiments herein are not meant to be limiting, and other detail placement approaches may be substituted.
[0262] In some standard cell implementation technologies, there is no concept of resource classes. In some usage scenarios "slots" correspond to tiled regions of a defined size. Any standard cell may be positioned at any location on a so-called standard cell grid, with the understanding that each standard cell consumes some number of contiguous abutting slots, and that neighboring standard cell instances are non-overlapping.

[0263] In some implementations, assessment of Q-Block legality by comparing demand for standard cell slots to the capacity of the Q-Block (determined by counting the number of contained standard cell slots) is an uncertain predictor of detail placement success. As an example, consider a Q-Block that is 10 standard cell rows high by 100 standard cell columns wide. The assigned standard cells in the Q-Block would be organized into no more than 10 rows, each row limited to 100 units (standard cell columns) in length. A detail placer may be unable to construct legal row-sets of instances. Continuing the example, consider 11 standard cell instances of a single cell type, each instance requiring 51 standard cell columns. Since no 100-column row can hold two such instances, at most 10 can be placed, and the Q-Block would be infeasible, even though the slot supply (1000) comfortably exceeded the demand (561).
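The infeasibility in the example above can be reproduced with a simple row-packing check. The first-fit-decreasing packing below is a stand-in heuristic, not the detail placer of this specification; for this particular instance it is conclusive, since no 100-column row can hold two 51-column cells under any packing.

```python
def rows_feasible(cell_widths, num_rows, row_width):
    """First-fit-decreasing packing of cell widths into fixed-width rows.
    Returns True if every cell found a row; a stand-in for a detail
    placer's row-set construction."""
    remaining = [row_width] * num_rows
    for w in sorted(cell_widths, reverse=True):
        for r in range(num_rows):
            if remaining[r] >= w:
                remaining[r] -= w
                break
        else:
            return False
    return True

cells = [51] * 11               # 11 instances, each 51 columns wide
slot_demand = sum(cells)        # 561 slots demanded
slot_supply = 10 * 100          # 1000 slots available: supply exceeds demand
packable = rows_feasible(cells, num_rows=10, row_width=100)  # yet rows cannot hold them
```

The slot-count comparison passes while the row construction fails, which is exactly why slot supply alone is an uncertain predictor of detail placement success.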

[0264] As a result, standard cell embodiments may use a quantization (a Q-Block sizing) that is sufficiently larger than the largest frequently occurring standard cell (in certain usage scenarios, standard cells having sequential logic, a.k.a. "sequentials") to improve the likelihood that slot assignment will succeed during detail placement even where unwieldy standard cells are over-concentrated. In some embodiments of a detail placer for standard cell design flows, the detail placer may include a mechanism for feeding back from detail placement to legalization.

[0265] In one representative standard cell embodiment, the feedback includes operating an iterative partitioner included in the detail placer. Solution of each Q-Block is attempted. If any fail, then the capacity of the failing Q-Blocks is artificially depressed. The partitioner then runs to attempt to redistribute the netlist nodes, distorting the net topologies to the least possible extent while still achieving resource legality in each Q-Block, including the effect of the artificially depressed capacity of certain Q-Blocks, for the purpose of inducing the system to move some cells to different neighboring Q-Blocks in the hope of finding a soluble configuration. Some embodiments targeting standard cell flows are based upon a conceptual framework where the global-placement position coordinates assigned to each netlist node are deemed ideal when considered as a collective determination, not as an individual determination. Consequently, the standard cell embodiment partitioner preferably seeks to move across the failing Q-Block edges whatever is already closest to the edge, and that can therefore be displaced slightly with the least distortion in the overall netlist net topology.

[0266] In another representative standard cell embodiment, the cells in a Q-Block are grouped into rows, determined by considering the relative juxtaposition of the cells in the coordinate that varies perpendicularly to the standard cell rows (such as the y coordinate). Thus cells at a higher y position coordinate will be promoted to the row above in preference to cells with a lower y position coordinate. Once the rows are formed and the contents optimized until each row fits in the width of the containing Q-Block, layout within the rows proceeds in a similar fashion. Specifically, cells are laid out horizontally within each row, and the global-placement-assigned x position coordinates are used to determine relative packing order along the standard cell row within each Q-Block.
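A minimal sketch of the row formation just described: cells are ranked by their global-placement y coordinate to choose rows (higher y to higher rows), then ordered within each row by their x coordinate. The even split into rows is an assumption made for illustration; the specification instead optimizes row contents until each row fits the Q-Block width.

```python
def order_into_rows(cells, num_rows):
    """cells: list of (name, x, y) from global placement. Cells with higher
    y are promoted to higher rows; within a row, the global-placement x
    coordinate fixes relative packing order."""
    by_y = sorted(cells, key=lambda c: c[2], reverse=True)
    per_row = -(-len(cells) // num_rows)   # ceiling division (assumed even split)
    rows = [by_y[i:i + per_row] for i in range(0, len(by_y), per_row)]
    return [sorted(row, key=lambda c: c[1]) for row in rows]
```

With four cells and two rows, the two highest-y cells form the top row and each row is then left-to-right ordered by x.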

[0267] In another representative standard cell embodiment, the detail placement is solved via a re-employment of the SDI-directed techniques described previously for global placement. The spreading fields of global placement are replaced with forcing terms modeling a spring drawing each netlist cell instance toward the respective node position coordinate determined by global placement. Moreover, pairwise overlap repulsion interactions between neighboring nodes are included and tend to tile the nodes toward net disentanglement.

[0268] In variations of embodiments of detail placement for standard cells, further optimizations may be performed through orientation speculation and pin swapping, e.g. to reduce routing congestion. The optimizations are based upon the observation that each net that crosses a given line contributes to demand for tracks crossing the line. If the demand for the tracks crossing the line exceeds the supply of perpendicular-running tracks, then routing is more difficult. However, the condition of over-demand for routing tracks may be highly localized. If nets crossing the line from opposite directions to reach pins on either side can be swapped, then the track demand is reduced by two. Techniques include pin swapping by exploitation of pin permutability semantics on an underlying standard cell (such as swapping inputs on a NAND gate) and by rotation and flipping of a standard cell according to standard rules of the implementation architecture.

[0269] Conceptually, timing closure and timing-driven placement operate to reduce critical timing delays to facilitate higher-speed operation of an implementation of a netlist. A high fidelity timing kernel, in conjunction with precise modeling of interconnect parasitics, specifies timing-driven attractive forces, or modifies effects of one or more net attractive forces used during SDI-directed global placement. Timing-driven forces are derived from a snapshot of state variables of the time-evolving dynamical system simulation. As the dynamical system changes (due to influences of various forces, for example), electrical characteristics of a placement of the associated netlist also change, and effects of the new state variables (such as longer or shorter interconnects) are fed back into a timing kernel to reevaluate timing characteristics of a placement corresponding to the state variables. In some embodiments timing-driven forces are calculated and applied to nets selectively, in certain embodiments as a function of any combination of one or more slack coefficients, worst negative slack values, and total negative slack values. In some embodiments timing forces may also be derived using a path-based approach, where the paths include various critical and near-critical paths according to a placement of the netlist as indicated by the state variables.

[0270] Various quanta of SDI simulation time may advance between timing-driven force re-calculations, from as frequently as a single SDI iteration to as infrequently as an unbounded number of SDI iterations. For example, timing-driven forces may be adjusted on every iteration of the integration timestep or every N iterations, where N may be provided by a user or determined by software, according to embodiment. In some embodiments, the frequency of timing update may be automatically computed by the timing kernel (in an "auto timing-directed-force update mode") depending on the state of the dynamical system. For example, when the system is "hot" (i.e. has a relatively high ratio of kinetic energy to total energy), timing force updating is performed more frequently than when the system is "cold" (i.e. has a relatively low ratio of kinetic energy to total energy). In some embodiments the update frequency is determined in part by tracking system parameters, including any combination of cumulative node displacement since the last update, maximum displacement per net, and other similar metrics, to trigger an auto-update of timing forces. An incremental timing update is performed on the timing graph when relatively small displacements of nodes are detected with respect to the prior update. Iterative slack allocation and net delay budgets are computed on the instantaneous placement every N iterations to adapt the timing budgets based on the time-evolving placements.
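The hot/cold heuristic can be illustrated as follows. The interval values and the 0.5 energy-ratio threshold are invented for the sketch; the specification states only that a hot system (high kinetic-to-total energy ratio) refreshes timing forces more frequently than a cold one.

```python
def update_interval(kinetic_energy, total_energy,
                    hot_interval=10, cold_interval=100, hot_ratio=0.5):
    """Auto timing-directed-force update mode (sketch): return the number
    of SDI timesteps between timing-force updates. A 'hot' system (high
    kinetic/total energy ratio) updates more often than a 'cold' one.
    Thresholds and intervals here are illustrative assumptions."""
    ratio = kinetic_energy / total_energy if total_energy > 0 else 0.0
    return hot_interval if ratio >= hot_ratio else cold_interval
```

A displacement-tracking variant would instead compare cumulative node displacement since the last update against a threshold to trigger the refresh.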

[0271] Certain high fanout nets (or portions of high fanout nets) are identified as non-critical with respect to timing and have little or no timing-driven forces associated with them. False timing paths and non-critical multi-cycle timing paths are also identified as non-critical and receive little or no timing-driven force enhancements. In some usage scenarios control nets such as reset and one or more clocks may be recognized as timing non-critical.

[0272] Timing critical nets (or portions of nets) are identified and receive relatively stronger timing-driven forces, in certain embodiments based on normalized timing slack determined for the net. Thus a distinct timing-driven force component may be associated with every pin on every net (or any sub-grouping thereof). In embodiments where the connectivity-based net attractive force is equal for each pin on a net, the timing-driven force tends to enable prioritizing resultant physical location according to greater timing criticality. At a macroscopic level, timing-driven forces tend to keep timing critical and near timing critical devices in relatively close physical proximity, thus reducing associated parasitics and improving timing performance. The timing-driven forces also tend to guide placements toward solutions where relatively higher drive strength devices are associated with relatively greater parasitic loads (corresponding to longer wire lengths) and relatively lower drive strength devices are associated with relatively lower parasitics (corresponding to shorter wire lengths).
[0273] In some embodiments parasitics (for example, parasitics of relatively short interconnects) are estimated using a simple bounding box model (i.e. net parasitics are estimated as the semi-perimeter of a bounding box of the pins on the net multiplied by a constant wire capacitance per unit length). In some embodiments transformations including buffering, clock tree synthesis, driver resizing, timing-based restructuring, and incremental timing post fixes are ignored during parasitic estimation, while in other embodiments the transformations are accounted for by various estimation techniques.
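The bounding box model reduces to a few lines: the net's estimated wire capacitance is the semi-perimeter of the pin bounding box times a per-unit-length wire capacitance. The pin coordinates and the capacitance constant in the test are arbitrary illustrative values.

```python
def bbox_net_capacitance(pins, cap_per_unit):
    """Bounding-box parasitic estimate: net capacitance approximated as the
    semi-perimeter of the pins' bounding box times wire capacitance per
    unit length. pins: list of (x, y) coordinates."""
    xs = [p[0] for p in pins]
    ys = [p[1] for p in pins]
    semi_perimeter = (max(xs) - min(xs)) + (max(ys) - min(ys))
    return semi_perimeter * cap_per_unit
```

Resistance can be estimated the same way with a per-unit-length resistance constant in place of the capacitance constant.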

[0274] In some embodiments parasitics (for example, parasitics of relatively long or relatively high fanout interconnects) are estimated after inserting buffer trees and building heuristically constructed near-Minimal Rectilinear Steiner Trees (MRST) of the high fanout nets, to accurately and efficiently estimate circuit timing. In some embodiments devices are modeled as having an effective resistance that ignores input ramp time and non-linear timing response effects of the device based on output capacitive load. In some embodiments a high fidelity timing kernel propagates input ramp rise and fall times (treating them separately), and simultaneously propagates circuit ramp time from various timing start points to various timing end points. Timing exceptions (such as false and multi-cycle paths) are propagated through the timing graph to account for effects of the exceptions.

[0275] In some embodiments, during placement, a lumped capacitive interconnect delay model that ignores effects of distributed Resistance-Capacitance (RC) trees is used to estimate selected parasitic effects. In some embodiments actual net routing information (or approximations thereof) forms a basis for generation of one or more distributed RC trees for estimating selected parasitic effects.

[0276] In some embodiments timing closure is implemented in a Timing Kernel (TK) that dynamically updates a timing graph based on the current placement state (that is in turn derived from the locations of the nodes in the SDI simulation). Net and device delays are computed and propagated to slack results on each pin, normalized slack coefficient(s) are determined, and then updated timing-driven forces are generated for use by subsequent SDI simulation.
[0277] The timing graph is a graph data structure representing the netlist and includes pre-computations and pre-propagations of user-defined constraints, including any combination of clock period, false path and multi-cycle path identifications, arrival times at primary inputs, and required times at primary outputs. In certain embodiments the timing graph is organized as a Directed Acyclic Graph (DAG) data structure. In certain embodiments the pre-computations and pre-propagations are generated only when a new netlist is provided or modifications are made to the current netlist. The timing graph includes timing node elements and timing edge elements. A timing node element represents pins of a macro (such as a morphable-device), and a timing edge element represents connectivity of timing node elements (such as a flattened or non-hierarchical net of the netlist).

[0278] Timing delay through a timing node element (also known as a stage delay) is a function of several parameters, including a cell delay (Dc) and a wire delay (Dw). The cell delay is a function of input transition time and cell output loading. In some embodiments cell delay values are determined via a cell delay table lookup. The cell delay table may be representative of non-linear timing behavior and is specified in a timing library (such as a portion of "Technology Description" 121 of Fig. 1). Cell output transition times are also a function of input transition times and output loads, and are computed by the TK and propagated from inputs to outputs.
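The stage delay computation might be sketched as below. Real timing libraries specify two-dimensional non-linear delay tables indexed by input transition time and output load and interpolate between entries; the nearest-grid-point lookup here is a simplifying assumption, as are the table values.

```python
def lookup_cell_delay(table, trans_axis, load_axis, trans, load):
    """Nearest-entry lookup in a (transition x load) cell delay table.
    A stand-in for a timing library's non-linear delay model; production
    timing kernels interpolate rather than snap to the nearest grid point."""
    i = min(range(len(trans_axis)), key=lambda k: abs(trans_axis[k] - trans))
    j = min(range(len(load_axis)), key=lambda k: abs(load_axis[k] - load))
    return table[i][j]

def stage_delay(cell_delay, wire_delay):
    # Timing delay through a timing node element: cell delay Dc plus wire delay Dw.
    return cell_delay + wire_delay

# Illustrative table: rows indexed by input transition, columns by output load.
delay_table = [[0.10, 0.20],
               [0.30, 0.40]]
```

Output transition times would be looked up and propagated the same way, from inputs toward outputs.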

[0279] A Steiner buffered tree constructor creates an interconnect tree based on coordinates of pins of morphable-devices. RC parasitics are then computed from the interconnect tree, and corresponding cell delays are computed according to pi-models of the RC parasitics. Wire delays are computed using Elmore-Penfield-Rubinstein delay models according to estimated net and pin parasitics.
[0280] Fig. 7A is a flow diagram illustrating selected aspects of an embodiment of delay path reduction and minimization, as an example of timing closure (such as "Timing Closure" 205 of Fig. 2). As described with respect to Fig. 2, in some embodiments timing closure is essentially operative within global placement, rather than, or in addition to, operative external to global placement. In other words, in some embodiments timing closure operations are performed intimately with operations of global placement (such as those illustrated in Fig. 3A). Flows having closely associated global placement and timing improvement are known as having timing-driven global placement. For example, timing-driven forces may be adjusted (such as in "Macro Adjust Forces" 304) on every iteration (via "Repeat?" 307), or the timing-driven forces may be adjusted every N iterations, where N is computed or is provided by a user (such as via "Commands and Parameters" 130 of Fig. 1). The following discussion is according to timing closure operation within global placement; however, the technique is applicable in other contexts.

[0281] Processing begins ("Start" 701) with new morphable-device locations as derived from SDI simulated time advancement and resultant node location evolution. Timing node element locations and associated pin spatial positions are updated accordingly in a timing graph ("Update Pin Coordinates" 702). Approximate interconnect distributed resistance and capacitance values are determined ("Estimate Parasitics" 703) via any combination of an NBB technique (such as for short interconnects) and a Steiner-route technique (such as for long interconnects).

[0282] Driver trees are then added for long and high fanout nets, and for nets exceeding a specified maximum capacitance threshold ("Insert Buffers" 704). In some embodiments the driver trees are constructed according to recursive bipartition-based buffering, until a maximum drive capacity has been met. If one or more new devices are added, thus changing the netlist, then processing loops back to repeat parasitic estimation ("Changes" 704C). If no new devices are added (for example because current buffering is sufficient or maximum drive capacity has been met), then more nearly accurate parasitic approximations are determined, in certain embodiments via Steiner-route techniques, and processing continues ("No Changes" 704N).
[0283] Delays are then disseminated through the timing graph, including computing new timing edge element specific transition times ("Propagate" 705). Arrival times and required times are also propagated through the timing graph in topological order. Arrival times are propagated via a Depth-First Search (DFS) order while required times are propagated in reverse DFS order. Spare delay time is then derived for each timing node element of the timing graph ("Compute Slack" 706). The resultant slack times are then normalized and used to determine revised timing weight coefficients and associated timing-driven forces for one or more pins ("Normalize Slack" 707). In some embodiments timing-driven forces are reevaluated only for pins participating in timing critical nets.
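The propagation just described can be sketched over a small timing graph: arrival times move forward in topological (DFS) order, required times move backward in reverse DFS order, and slack is required minus arrival at each timing node element. The DAG and constraint values in the test are invented for the illustration.

```python
def compute_slacks(edges, arrival_at_inputs, required_at_outputs):
    """Forward-propagate arrival times and backward-propagate required times
    over a timing DAG, then slack = required - arrival per timing node.
    edges: {node: [(successor, edge_delay), ...]}. Assumes every node lies
    on some constrained input-to-output path."""
    nodes = set(edges) | {s for outs in edges.values() for s, _ in outs}
    order, seen = [], set()

    def dfs(n):                      # post-order DFS for topological sorting
        if n in seen:
            return
        seen.add(n)
        for s, _ in edges.get(n, []):
            dfs(s)
        order.append(n)

    for n in nodes:
        dfs(n)
    order.reverse()                  # sources first

    arrival = dict(arrival_at_inputs)
    for n in order:                  # arrival times, forward in DFS order
        for s, d in edges.get(n, []):
            arrival[s] = max(arrival.get(s, float("-inf")), arrival[n] + d)

    required = dict(required_at_outputs)
    for n in reversed(order):        # required times, in reverse DFS order
        for s, d in edges.get(n, []):
            if s in required:
                required[n] = min(required.get(n, float("inf")), required[s] - d)

    return {n: required[n] - arrival[n] for n in nodes}
```

With a 2-unit input stage and a 3-unit output stage against a 4-unit requirement, every node on the path carries the same negative slack, which a subsequent normalization step would convert into timing weights.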
[0284] A determination is then made as to whether the timing closure is acceptable ("OK Result?" 708). If so, then flow is complete ("OK" 205Y), and processing continues to routing (see Fig. 2). If not, then flow is also complete ("Not OK" 205N), but subsequent processing then includes one or more revisions (see Fig. 2).
[0285] Fig. 7B illustrates a conceptual view of selected elements of an embodiment of timing-driven forces, such as used during timing-driven global placement. Driver D 715 is coupled to pins of four loads L1 711, L2 712, L3 713, and L4 714. Each node is shown with an associated timing slack in parentheses (-2, -1, 0, and -1, respectively). Corresponding timing-driven forces are shown as F1 721, F2 722, F3 723, and F4 724, respectively. Since the timing slack for L1 711 is the most negative (-2), the corresponding timing-driven force F1 721 is the largest of the four illustrated. Similarly, since the timing slack for L3 713 is the least negative (0), the corresponding timing-driven force F3 723 is the smallest of the four illustrated. During SDI-directed placement, the action of timing forces F1 721, F2 722, F3 723, and F4 724 would be such that the dynamical system nodes corresponding to D 715 and L1 711 would experience a stronger mutual attraction than that between D 715 and L2 712, L3 713, or L4 714, other things being equal. However, in a realistic circuit, many other factors would be simultaneously considered, and moreover, more than one independent critical path could flow through any of the participating nodes. Consequently, the actual motion of the nodes may not turn out to be the same as might be indicated by such a consideration-in-isolation, as the full complexity of the dynamical system may still overcome timing forces acting on any given node.

Steiner Route Tree Construction

[0286] In some embodiments Steiner-route tree construction is according to a heuristic-based modified Prim-Dijkstra algorithm, including elements of Prim's Minimum Spanning Tree (MST) algorithm and Dijkstra's Shortest Path Tree (SPT) algorithm, using a coefficient alpha that is between 0 and 1. As MST yields minimum wire length (for a spanning tree) and SPT yields a minimum radius tree, the coefficient alpha enables efficient trade-offs between MST and SPT.
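A sketch of the alpha trade-off: with each candidate edge scored as alpha times the path length back to the source plus the edge length, alpha = 0 reproduces Prim's MST and alpha = 1 reproduces Dijkstra's SPT. This is one common formulation of the Prim-Dijkstra heuristic (here over Manhattan distances); the specification does not give its exact cost function, so treat the blend below as an assumption.

```python
import heapq

def prim_dijkstra(points, alpha):
    """Prim-Dijkstra spanning-tree heuristic over points (Manhattan metric).
    Candidate edge (u, v) is scored alpha * path_len(u) + dist(u, v):
    alpha = 0 gives Prim's MST, alpha = 1 gives Dijkstra's SPT.
    Returns (parent, path_len) with node 0 as the source."""
    def dist(a, b):
        return abs(points[a][0] - points[b][0]) + abs(points[a][1] - points[b][1])

    parent, path_len = {}, {0: 0.0}
    heap = []
    for v in range(1, len(points)):
        heapq.heappush(heap, (dist(0, v), 0, v))
    while len(path_len) < len(points):
        score, u, v = heapq.heappop(heap)
        if v in path_len:            # already attached to the tree
            continue
        parent[v] = u
        path_len[v] = path_len[u] + dist(u, v)
        for w in range(1, len(points)):
            if w not in path_len:
                heapq.heappush(heap, (alpha * path_len[v] + dist(v, w), v, w))
    return parent, path_len
```

For three collinear pins, alpha = 0 chains them (minimum wire length) while alpha = 1 prefers a direct source connection (minimum radius), showing the trade-off a mid-range alpha balances.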

Resistance/Capacitance (RC) Parasitic Estimation

[0287] In certain embodiments, interconnect delay, or wire delay, is determined by modeling a net as a distributed RC network, with load devices presenting a capacitive load on the net. Various approximation schemes may be used, according to embodiment, to estimate the eventual routing for the net before the routing is performed (during placement, for example). The estimated routing is used in turn to derive associated approximate RC network parameters, and the RC approximations are then used to estimate timing delays, as described elsewhere herein.

[0288] The RC network is divided into segments, and a wire segment delay is computed for each segment. In some embodiments the wire segment delay is computed according to an Elmore delay model (wire segment delay equals wire segment resistance multiplied by the sum of the wire segment capacitance and all of the associated input capacitances). In some embodiments the wire segment delay is computed according to a higher order moment delay calculation.
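The Elmore segment formula above sums, along a chain of wire segments, each segment's resistance times the capacitance it drives (its own wire capacitance plus everything downstream, ending in the load's input capacitance). The two-segment example in the test is invented; a real net would generally be a branching RC tree rather than a chain.

```python
def elmore_chain_delay(segments, load_cap):
    """Elmore delay of a chain of (resistance, capacitance) wire segments
    driving a single load. Walking from the load back toward the driver,
    each segment contributes R * (its own wire C + all capacitance
    downstream of it)."""
    delay, downstream = 0.0, load_cap
    for r, c in reversed(segments):
        delay += r * (c + downstream)
        downstream += c
    return delay
```

A higher-order moment calculation would refine this first-moment estimate, at additional cost.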

[0289] In some embodiments routing associated with large (or high fanout) nets is approximated by Steiner tree graph analysis. Delays from a driver to each respective load are then determined as the sum of resistance in series between the driver and the load multiplied by the sum of the capacitance between the driver and the load, where "between" refers to the tree graph segments coupling the driver to the load.

[0290] In some embodiments parasitics for short nets are estimated using net contributing factor heuristics. For example, wire capacitance from a driver to a load is equal to a load contribution factor multiplied by a "NetMSRT" multiplied by a capacitance per unit length. NetMSRT is equal to a Net Semi-Perimeter (NSP) multiplied by an "NSP-FanOut-Scaling" factor. The NSP-FanOut-Scaling factor is equal to one-half the quantity equal to the square root of the number of net loads plus one. The load contribution factor describes a relative contribution of a load with respect to all of the loads on the net, and may be expressed as the distance to the load divided by the entire length of the net. Wire resistance is derived similarly to wire capacitance, except that resistance per unit length is used instead of capacitance per unit length.
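The heuristic might be coded as below. The phrase "one-half the quantity equal to the square root of the number of net loads plus one" is ambiguous; the sketch assumes 0.5 * (sqrt(num_loads) + 1), and the "entire length of the net" is approximated by the sum of Manhattan driver-to-load distances. Both are labeled assumptions rather than the specification's exact definitions.

```python
import math

def short_net_wire_cap(driver, load, all_loads, cap_per_unit):
    """Net contributing factor heuristic for a short net's wire capacitance:
    load_contribution_factor * NetMSRT * cap_per_unit, with
    NetMSRT = NSP * NSP-FanOut-Scaling. Assumptions (see lead-in): the
    scaling factor is 0.5 * (sqrt(num_loads) + 1) and the net's 'entire
    length' is the sum of Manhattan driver-to-load distances."""
    def manhattan(a, b):
        return abs(a[0] - b[0]) + abs(a[1] - b[1])

    pts = [driver] + all_loads       # Net Semi-Perimeter over driver and loads
    xs, ys = [p[0] for p in pts], [p[1] for p in pts]
    nsp = (max(xs) - min(xs)) + (max(ys) - min(ys))
    scaling = 0.5 * (math.sqrt(len(all_loads)) + 1)
    net_msrt = nsp * scaling
    total_len = sum(manhattan(driver, l) for l in all_loads)
    load_factor = manhattan(driver, load) / total_len
    return load_factor * net_msrt * cap_per_unit
```

Substituting a resistance-per-unit-length constant for `cap_per_unit` yields the analogous wire resistance estimate.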

[0291] Fig. 7C illustrates a spatial organization (or topology) of driver D 715 and coupled loads L1 711, L2 712, L3 713, and L4 714 of Fig. 7B.

[0292] Fig. 7D illustrates an embodiment of NBB estimation of routing to cover the driver and the loads of Fig. 7C. As shown, NBB 725 covers all of the loads and the driver, and is defined by the spatial locations of D 715, L1 711, and L4 714.

[0293] Fig. 7E illustrates an embodiment of a rectilinear SRT estimation to cover the driver and loads of Fig. 7C.

[0294] Fig. 7F illustrates an embodiment of estimated RC parasitics associated with the RST of Fig. 7E.

Timing Weights Computation

[0295] In certain embodiments a timing weight is computed for all pins having a negative timing slack. All other pins are considered non-critical. Non-critical nets are marked as inactive nets and no timing forces are applied to them. Non-critical pins are assigned timing weights of zero (and thus effect no timing-driven forces). The timing weight of a pin may be modeled as a function of various timing parameters including pin slack, worst negative slack, total negative slack, interconnect length, and other similar parameters, according to implementation. In some embodiments the timing weight for a pin is equal to the square of the quantity equal to the slack of the pin divided by the worst negative slack of the entire netlist, and in various embodiments the timing weight is computed according to any number of linear and higher-order calculations. The timing-driven forces are computed according to Hooke's law with a coefficient equal to the respective timing weights (i.e. timing force equals negative timing weight multiplied by distance between driver node and load node).
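The weight and force rules above translate directly: non-critical pins (slack of zero or better) get weight zero, critical pins get the square of pin slack over worst negative slack, and the Hooke's-law force is the negative weight times the driver-to-load distance. The test values echo the slacks of Fig. 7B.

```python
def timing_weight(pin_slack, worst_negative_slack):
    """Timing weight: zero for non-critical pins (slack >= 0), otherwise
    the square of (pin slack / worst negative slack of the netlist)."""
    if pin_slack >= 0:
        return 0.0
    return (pin_slack / worst_negative_slack) ** 2

def timing_force(pin_slack, worst_negative_slack, distance):
    """Hooke's-law timing-driven force: negative timing weight times the
    driver-to-load distance (attractive, scaling with criticality)."""
    return -timing_weight(pin_slack, worst_negative_slack) * distance
```

As in Fig. 7B, the load with slack -2 attracts its driver more strongly than the load with slack -1, and the zero-slack load contributes no timing force at all.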

Selected Timing Closure User Commands

[0296] Timing closure and timing-driven placement are automated to varying degrees according to embodiment. In certain embodiments the automation is controlled or directed by a plurality of control parameters provided in data files or scripts (such as via "Commands and Parameters" 130 of Fig. 1). In some embodiments a relatively small number of control parameters may be provided by a Graphical User Interface (GUI). Timing constraints are used to perform timing closure and timing-driven placement, and the GUI may also provide for user input of timing constraints files, such as Synopsys Design Constraint (SDC) compatible information, via a "source SDC" command or menu item.
[0297] In some embodiments and usage scenarios design automation software (including timing closure and timing-driven placement) may be operated in a batch mode. In the batch mode any combination of selected switches may be specified in a file (such as a "schedule file" that may be included in "Commands and Parameters" 130 of Fig. 1). A first control switch instructs SDI-driven (sometimes also referred to as force-driven) placement operations (such as operations performed by a placement engine) to apply timing-driven forces at each timestep. By default, the forces are turned off in some embodiments. Timing-driven forces are recomputed at predefined intervals, or at a selected frequency with respect to timesteps, as specified by another control switch.
[0298] A second control switch instructs SDI-driven placement to perform timing analysis at predefined time intervals of the SDI simulation, and to report a specified number of critical paths or selected critical paths. In certain usage scenarios the report includes some or all of the most critical paths. If the first control switch is on, then the second control switch is automatically turned on also. However, in some usage scenarios, users may keep the first control switch off with the second control switch on to perform a timing analysis based on a current system configuration. Selected critical paths may then be reported at predefined intervals during SDI-driven placement. The interval may be user specified, and the reported paths may include a selection of the most critical paths, with the report including worst-negative-slack information.

[0299] A third control switch controls how frequently a timing update is performed and timing-driven force computation is performed in the SDI simulation (i.e. when the first control switch is on). In some embodiments a default value for a parameter associated with the third control switch is 50; i.e. every 50 timesteps timing-driven forces are determined anew. In certain usage scenarios a larger value is specified for larger designs. For example, if a design is more than one million gates, then an iteration frequency of 100 may be specified. In some usage scenarios the frequency may be adjusted dynamically (either manually by a user or automatically by software). For example, at stages of placement where changes are relatively small (such as later stages of placement), the interval may be increased.
[0300] In some embodiments GUI "radio buttons" may be provided to enable a user to enable (or disable) any combination of the control switches. In some embodiments a command window (either separate from or associated with the GUI) may be used to specify the third control switch and the associated parameter.

SDI-DIRECTED ELECTRONIC DESIGN AUTOMATION (EDA) FLOW

[0301] Figs. 8A and 8B collectively are a flow diagram illustrating selected details of an embodiment of an integrated circuit Electronic Design Automation (EDA) flow using one or more techniques including SDI-directed global placement, legalization, legalization-driven detailed placement, timing optimization, and routing. In the illustrations dashed boxes represent information provided in certain embodiments by users of the flow. In some embodiments element 815 is provided by users of the flow while in other embodiments it is generated by element 813, and thus 815 is shown having a unique dashed-box patterning.

[0302] As a starting point, a design to be implemented is provided as a Hardware Description Language (HDL) or Register Transfer Language (RTL) specification ("User Verilog / VHDL RTL Design" 812). Libraries are provided describing functional and timing characteristics associated with all library cells that may be implemented on a base wafer, such as a predetermined or prefabricated structured array wafer ("Cell Timing Models (.lib)" 811). The libraries may be accessed by various tools shown later in the flow.

[0303] The design is then converted to a specific implementation description according to the library and the design specification ("Synthesis" 813). Semiconductor vendor process information, such as the number and type of metal layers and via layers, process design rules, and process parameters, is provided ("Base Die Description" 814). The die description also includes all die floorplan information associated with implementation as a structured array, i.e. descriptions of SAF tiles. The die description is processed ("Design Create Import Verilog / VHDL" 816) in conjunction with a gate-level netlist produced by synthesis ("Gate-level Netlist (Verilog/VHDL)" 815), resulting in a parsed netlist.

[0304] Selected improvements are performed, such as buffer deletion, dead logic removal, inverter pair elimination, and constant propagation ("Design Pre-optimization (buffer deletion, dead logic removal)" 817). Then directives to guide the physical design are processed ("Load Floorplanning Constraints (IOs, RAMs, group, region constraints)" 818). In certain usage scenarios the floorplan constraints are used to "lock" selected elements into desired regions of the die. For example, IO pads may be assigned to the perimeter, and RAMs may be allocated to specific zones. Core logic may be guided to selected areas or grouped together as desired. In some embodiments the floorplan constraints are provided via one or more scripts ("Place Script; Floorplan Script" 822).

[0305] Timing performance criteria are then processed ("Load Timing Constraints" 819), in some embodiments according to timing libraries ("SDC Timing Libraries (lib)" 823). Information in the timing libraries may be according to an SDC format, and includes input arrival times, output required times, false path identification, and multi-cycle path notations. In certain embodiments locations are subsequently determined for all of the elements in the netlist ("Placement" 820), guided by previously provided constraints. Timing performance improvements are then made to effect timing closure ("Buffering Clock Tree Synthesis Timing Driven Buffering/Resizing" 821). Clock tree synthesis strives to meet desired clock skew constraints, and buffer resizing serves to meet user specified timing constraints.

[0306] Processing then flows (via 824) to output post-layout design data ("Export: DEF / Verilog" 831). In certain usage scenarios a format compatible with Design Exchange Format (DEF) is used to facilitate interchange with various EDA tools. The output DEF ("DEF" 832) specifies the structure of the design and all placement information. The output Verilog ("Verilog" 834) specifies the post-layout gate-level netlist. The DEF output is provided along with information describing routing technology ("LEF" 833) to compute interconnect details ("Router" 835). The resultant geometry is output as DEF ("Routed DEF" 836) that is processed ("3D Extractor" 837) along with the routing technology information to determine connectivity and parasitic information ("SPEF" 839). The parasitic information is according to a Standard Parasitic Exchange Format (SPEF).

[0307] A timing performance check is then made ("Timing Analysis" 840) using the parasitic information, the post-layout gate-level netlist, and device characterization information ("StdCell Library" 838). A correctness check is also made ("Formal Verification" 826) by comparing a pre-layout gate-level netlist ("Pre-layout Gate-level Netlist" 825) with the intended-to-correspond post-layout gate-level netlist. In some usage scenarios the pre-layout gate-level netlist is identical to the netlist output from synthesis.

[0308] The illustrated EDA flow is an example only, as some of the illustrated operations may be omitted or performed in slightly different orderings according to various embodiments.

[0309] Conceptually, a structured array architecture is defined to satisfy a plurality of user-specific designs. The architecture is optionally based on a pre-characterized standard cell library. A plurality of user-specific designs are targeted for the defined architecture, and physical layout is generated at least in part based on an SDI-directed place and route flow. An inventory of wafers (or die) built according to the structured array architecture is used as a starting point to manufacture instances of the user-specific designs. Thus a single structured array architecture (and corresponding predetermined wafer inventory) serves to implement more than one user-specific design via SDI-directed placement and routing.

[0310] Fig. 9 illustrates an embodiment of selected details of manufacturing integrated circuits, the circuits being designed in part based on SDI-directed design techniques. The manufacturing flow begins ("Start" 901) by receiving objectives for a design or a group of designs ("Goals" 902) along with optional information ("Standard Cell Library" 904) regarding relatively fixed-function elements previously manufactured and characterized according to a selected integrated circuit production facility or "fab". The received items are processed to determine one or more SAF tiles to be arrayed to form a structured array integrated circuit ("Define Structured Array" 903). The standard cell library information may be used to develop SAF tiles with lower cost than developing SAF tiles from "scratch". Fabrication images are produced from the structured array design ("Produce Lower Layer Masks" 905).

[0311] The lower layer masks are combined with starting materials ("Wafers" 906) to produce an inventory of pre-fabricated structured array die ("Fabricate Lower Layers" 907). A first and a second device are designed according to an SDI-driven place and route flow, and the resultant design databases are provided to the flow ("Device 1 SDI P&R Result" 908 and "Device 2 SDI P&R Result" 909). Each of the databases is then used to produce corresponding sets of upper layer fabrication images ("Produce Device 1 Upper Layer Masks" 910 and "Produce Device 2 Upper Layer Masks" 911, respectively). The upper layer masks are used to manufacture ("Fabricate Device 1 Upper Layers" 912 and "Fabricate Device 2 Upper Layers" 913, respectively) one or more integrated circuits according to each of the respective designs, using portions of the previously developed inventory ("Fabricate Lower Layers" 907). The manufactured devices are then tested ("Test Device 1" 914 and "Test Device 2" 915, respectively) and the flow is complete ("End" 999).

[0312] Fig. 10 illustrates an embodiment of selected details of a computer system to execute EDA routines to perform SDI-directed place and route operations. There are multiple sub-systems illustrated, including computing and storage complexes (System 1001A and System 1001B) and workstations (local WS 1017B and remote WS 1017C). Similar elements have identifiers using the same numerical base, and a letter suffix is used to distinguish different instances. For brevity, unless there is a notable difference between the instances, only the first instance of similar elements is described.
[0313] A data processing machine (System 1001A) includes a pair of computational elements (Processors 1014A and 1015A). Each processor includes a Central Processing Unit (CPUs 1010A and 1011A, respectively) as well as working memory (RAMs 1012A and 1013A, respectively). The machine is coupled to a storage array, such as disk 1018A, that includes images of EDA software (SW 1019A) and design database information (DD 1020A). An interconnection resource (Local Area Network LAN 1016) enables local communication between System 1001A, System 1001B, and a workstation/PC (WS 1017B) that enables local users to access the facilities to direct and observe computations. Systems 1001A and 1001B are also coupled to Wide Area Network WAN 1030, such as a corporate intranet, the Internet, or both. Remote WS 1017C communicates with any combination of System 1001A and System 1001B via WAN 1030. In certain embodiments, WS 1017C has a disk 1018C that includes images of EDA software (SW 1019C) and design database information (DD 1020C). In some embodiments at least part of the EDA software images may be compressed or encrypted while stored on disk.
[0314] SW 1019A may include one or more machine-readable executable files corresponding to any combination of processing operations illustrated in Fig. 1, as well as any processing operations performed on behalf of or under control of elements in Fig. 1. For example, global placement (such as SDI-directed global placement), legalization, detailed placement, timing closure, and routing operations may be encoded as portions of SW 1019A for execution by System 1001A. Similarly, design data (such as data corresponding to any combination of portions of "Commands and Parameters" 130 and "Working Data" 131) may be stored in portions of DD 1020A. In operation the CPUs (in conjunction with associated RAMs) execute portions of SW 1019A to perform assorted EDA functions.

[0315] In some embodiments SW 1019A may include routines that are chosen (or optimized) in part to facilitate parallel execution of EDA routines (such as SDI-directed global placement, legalization, detailed placement, and routing) on CPUs 1010A and 1011A. In some embodiments the parallel execution may be carried out on System 1001A simultaneously (or overlapping) with System 1001B (via LAN 1016) such that CPUs 1010A, 1011A, 1010B, and 1011B are operating together to provide an SDI-directed EDA solution for a single user-specific design. The parallel processing is not limited to two machines, nor to machines with multiple internal processors. Rather, the parallel computation may be performed on a collection of processors, however organized or subdivided amongst independent machines. For example, the software may run on a massively parallel supercomputer, on a network of multiprocessor computers, or on a network of single-processor computers.

[0316] In certain embodiments, each of System 1001A, WS 1017B, or WS 1017C may have an associated removable media drive, represented respectively by drives 1040A, 1040B, and 1040C. The removable media drives are used to load at least parts of the EDA software images, such as those discussed above, from removable media, represented respectively by disks 1045A, 1045B, and 1045C. The removable media and the associated drives can take many forms, including but not limited to optical, magnetic, and flash media, including such media as floppy disks, CD-ROMs, DVD-ROMs, and flash disks.
[0317] In certain embodiments, WS 1017C transfers at least parts of EDA software images SW 1019C from either or both of System 1001A and System 1001B via WAN 1030. With or without a local EDA software image, according to various embodiments, WS 1017C may interact with either or both of System 1001A and System 1001B for the purpose of locally or remotely executing or controlling any of the global placement (such as SDI-directed global placement), legalization, detailed placement, timing closure, and routing operations, as otherwise taught throughout this disclosure. In various embodiments, WS 1017C selectively has control interactions and/or data transfers (including data related to the design database information) with respect to either or both of System 1001A and System 1001B. In various embodiments, the transfers are selectively compressed or encrypted. At least parts of the EDA software images, the control interactions, or the data transfers, are thus observable as propagated signals at points that include signal observation point 1035C and point 1035A.
[0318] In various embodiments, the propagated signals selectively include interactions related to enabling and/or licensing of WS 1017C (or a particular user of WS 1017C) to locally and/or remotely execute and/or control any of the EDA operations taught herein. In certain embodiments, an FTP service is made available to WS 1017C for downloading of at least parts of EDA software image 1019C via WAN 1030. In related embodiments, the downloaded software is adapted to be a demonstration embodiment, with either limited functionality or functionality that operates only for a predetermined interval. In other related embodiments, a software key is used by WS 1017C (obtained via WAN 1030 or other means of distribution) to enable or restore functionality of at least parts of the EDA software, whether the EDA software was loaded from removable media 1045C or propagated via WAN 1030. In related embodiments, the management and distribution of the software key is a component of the licensing process. The licensing is not limited to workstations. In an analogous embodiment, at least part of System 1001A and System 1001B are licensed using selective aspects of the above-described techniques.

[0319] In certain embodiments, executing EDA software, as otherwise taught herein, selectively reports license-related events via WAN 1030 to license management processes running on at least one designated server. In related embodiments, the reported license-related events are evaluated in accordance with predetermined criteria, and alerts, reports, control events, and/or billings are selectively and/or automatically created and/or updated.

[0320] Fig. 11 illustrates an embodiment of an SDI-based detailed placement flow useful in a variety of applications. The SDI-based detailed placement flow may replace and/or augment operations performed after global placement and before routing (such as any combination of processing relating to "Legalization" 203 and "(SDI) Detailed Placement" 204 of Fig. 2).

[0321] In 1101 a legal global placement is developed (such as via "SDI Global Placement" 202 of Fig. 2). In 1102 nodes are (optionally) prevented from moving between Q-blocks, thus increasing the likelihood that a fitting (i.e. legal) global placement is retained during continued system evolution. In some usage scenarios where circuit density is at or near a threshold of what can be supported in a structured ASIC architecture, the processing of 1102 is invoked. In some usage scenarios where the processing of 1102 is omitted, subsequent legalization processing is used.

[0322] In 1103 spreading force strengths are increased, and in some usage scenarios the spreading forces are substantially increased. According to various embodiments the spreading forces are increased by any combination of reducing digital filtering of the fields sourcing the spreading forces and increasing the spatial resolution of the grid with respect to which the spreading fields are calculated. In some usage scenarios the (substantial) increase in spreading forces does not result in (substantial) bulk motion, since nodes (such as form-level nodes) are prevented from moving between Q-blocks. In some usage scenarios the (substantial) increase in spreading forces does add energy to the system, and various techniques for changing the ratio between kinetic and potential energy of the system may be employed (as described elsewhere herein).

[0323] In some usage scenarios processing in 1103 serves to overcome tight packing of form-level nodes that causes local density of the form-level nodes (on relatively short spatial length scales) to exceed slot density (i.e. supply) of the underlying SAF. In some usage scenarios the exceeding of supply increases the effort required by a slot assigner to discriminate between alternate slot assignments. By spreading out the form-level nodes and reducing large density fluctuations on short spatial length scales, the form-level nodes within the Q-block are driven farther apart, and thus closer to coordinates of ultimate slot assignments. In some usage scenarios the reduction of density fluctuations serves to reduce dislocation during detail slot assignment, thus improving quality of the detail placement overall.
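The density check motivating the spreading step can be sketched as follows: count form-level nodes per grid cell and flag cells whose local demand exceeds the fixed slot supply of the underlying SAF. This is a minimal sketch; the grid cell size and per-cell slot count are illustrative assumptions, not values from the text.

```python
from collections import Counter

def density_overflow(node_xy, slots_per_cell, cell_size=1.0):
    """Return {cell: excess} for grid cells where node demand exceeds slot supply."""
    demand = Counter()
    for x, y in node_xy:
        # Bin each node into the grid cell containing its coordinates.
        demand[(int(x // cell_size), int(y // cell_size))] += 1
    return {cell: n - slots_per_cell
            for cell, n in demand.items() if n > slots_per_cell}

# Three nodes packed into cell (0, 0) against a supply of two slots per cell:
nodes = [(0.2, 0.3), (0.4, 0.1), (0.7, 0.9), (3.5, 2.5)]
overflow = density_overflow(nodes, slots_per_cell=2)
```

Cells reported by such a check are candidates for increased spreading forces, since their nodes cannot all receive nearby slots.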

[0324] In 1104 morphing is optionally repeated, with new target locations for form centers. In some usage scenarios nodes demanding a resource may be unevenly distributed in a region, and thus some of the resource-level nodes are moved a comparatively long distance to reach a slot. The movement results in "cut inflation", where nets are forced to be routed over relatively longer distances and thus consume more routing resources than were anticipated by the form-level placement. The cut inflation results in decreased routability. The cut inflation may be overcome by the optional morphing, to improve the balance between spatial distribution of resource slots and nodes. Nodes are then moved shorter distances during slot assignment, reducing cut inflation and routability degradation.
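One hedged way to quantify the dislocation behind cut inflation is the total Manhattan distance between each node's form-level (ideal) coordinates and its assigned slot; larger totals suggest nets stretched beyond what the form-level placement anticipated. The metric and the sample coordinates below are illustrative assumptions, not the patent's definition.

```python
def total_dislocation(ideal_xy, slot_xy):
    """Sum of Manhattan distances from ideal node positions to assigned slots."""
    return sum(abs(ix - sx) + abs(iy - sy)
               for (ix, iy), (sx, sy) in zip(ideal_xy, slot_xy))

before = total_dislocation([(0, 0), (1, 1)], [(0, 3), (4, 1)])  # unbalanced slot supply
after = total_dislocation([(0, 0), (1, 1)], [(0, 1), (2, 1)])   # after re-morphing
```

A drop in this total after re-morphing indicates nodes travel shorter distances during slot assignment, consistent with reduced cut inflation.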
[0325] In 1105 the netlist is elaborated with resource-level nodes and nets spanning pins on the resource-level nodes (see the discussion relating to Fig. 12A and Fig. 12B). Forces are included to tie resources to respective parent forms. In some embodiments information relating to the resource-level nodes (and associated spanning nets) is retained in extended data structures to facilitate SDI-based processing of the resource-level nodes.

[0326] In 1106 forces and interaction coefficients are initialized to relatively low values for the new resource-level elements of the combined (i.e. elaborated) netlist. Integration is then resumed in 1107. The resumed integration is according to the forces and interaction coefficients for the new elements in addition to the forces and the interaction coefficients "inherited" from the global SDI-based processing. In some usage scenarios using the new and inherited forces and coefficients together results in disentanglement of the resource-level nodes now present in the netlist. Enabling the resource-level nodes to move independently of each other provides a context for resources to move left (or right) or up (or down) with respect to sibling resources of the same parent form. The movement of the resource-level forms enables more efficient slot assignments otherwise indistinguishable when only the center of the parent form is examined.

[0327] In 1108 integration (i.e. time evolution of the system) is stopped according to selected criteria. In some embodiments dampening effects are increased to drive the system toward a new state reflecting separation of resource-level nodes and to prevent or reduce thrashing. In some embodiments the dampening effects are auto-regulated.

[0328] The selected criteria may include any combination of a number of integrator steps, an amount of "system time", system kinetic energy (i.e. temperature) falling to a threshold value, system kinetic energy falling by a threshold percentage with respect to an initial value, and system kinetic energy falling by a threshold percentage in a single time step. The number, the amount, the threshold value, and the threshold percentages may be predetermined or programmatically varied according to various implementations and usage scenarios.
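The combination of stopping criteria above can be sketched as a single predicate in which any satisfied condition halts integration. All threshold values here are illustrative assumptions; the text leaves them predetermined or programmatically varied.

```python
def should_stop(step, sys_time, ke, ke_initial, ke_prev,
                max_steps=10_000, max_time=50.0,
                ke_floor=0.01, drop_frac=0.99, step_drop_frac=0.5):
    """Return True when any of the selected stopping criteria is met."""
    return (step >= max_steps                        # integrator step budget
            or sys_time >= max_time                  # "system time" budget
            or ke <= ke_floor                        # kinetic energy (temperature) floor
            or ke <= ke_initial * (1.0 - drop_frac)  # % drop vs. initial value
            or (ke_prev - ke) >= ke_prev * step_drop_frac)  # % drop in one time step
```

A driver loop would call this once per integrator step, passing the current and previous kinetic energies.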

[0329] In 1109 all Q-blocks are processed. In some embodiments the processing for each Q-block is according to functions described elsewhere herein with respect to Fig. 13. In 1110 processing relating to 1109 is repeated until stopping criteria are met. In some embodiments the criteria include full placement of all resource classes. In some embodiments processing then continues according to functions described elsewhere herein with respect to Fig. 14.
[0330] Figs. 12A and 12B illustrate concepts relating to an embodiment of netlist elaboration. Fig. 12A illustrates a portion of a system with three form-level nodes located on computational grid 1210 and coupled by a plurality of form-level nets. Fig. 12B illustrates the system of Fig. 12A with resource-level nodes (corresponding to resource-level forms) for each of the form-level nodes "added" to the system. Also illustrated are connections between resource-level nodes and corresponding parent nodes, as well as resource-level nets. The parent connections and resource-level nets are representative of corresponding forces and interaction coefficients that are added to the system as a result of elaboration and in preparation for SDI-based detailed placement time evolution. The resource-level nodes and nets may be retained in extended data structures for the SDI-based processing.

[0331] Fig. 13 illustrates an embodiment of detailed placement of a Q-block. In 1301 the priority of each resource class in a Q-block is assessed, based on a combination of factors relating to resource supply and consumption. Less supply makes for higher priority, and more consumption makes for higher priority. Note that prioritization results naturally vary from one Q-block to another, as nodes (demand) and available slots (supply) vary from one Q-block to another. Processing according to 1310, 1320, and 1330 is then performed for each resource class in order according to the resource class prioritization.
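One plausible ranking consistent with the rule above (less supply and more consumption both raise priority) is to sort resource classes by their demand/supply ratio, highest first. The patent does not fix a formula, so the ratio and the sample numbers are illustrative assumptions.

```python
def class_priority(supply, demand):
    """Order resource classes by demand/supply ratio, scarcest-relative first."""
    return sorted(demand, key=lambda c: demand[c] / supply[c], reverse=True)

supply = {"LUT": 100, "FF": 80, "RAM": 4}   # slots available in this Q-block
demand = {"LUT": 90, "FF": 20, "RAM": 4}    # nodes needing each class
order = class_priority(supply, demand)      # processed in this order
```

Because supply and demand differ per Q-block, the resulting order naturally varies from one Q-block to another, as the text notes.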

[0332] In 1310 d^2-optimized slot assignment for nodes of the respective resource class is performed via one or more techniques identical to or similar to processing associated with elements illustrated or discussed with respect to Fig. 6 (such as "Pairwise Interchange" 603). In some embodiments the slot assignment is performed using an implementation-dependent technique.

[0333] In 1320 resource-level macros of the respective resource class are assigned to computed (or destination) slots. The assignments are then "fixed" (i.e. prevented from moving or being reassigned). According to various embodiments the fixing may be via any combination of a variety of techniques. The techniques include:
- Instantaneous enactment, i.e. a node is moved directly to the destination slot and locked;
- Gradual enactment, i.e. a node is propelled toward the destination slot using a slow but overwhelming force, stronger than all other forces acting on the node, so that the node reaches the destination slot in an adiabatic motion over some reasonable number of timesteps and is locked there; and
- Direct parametric motion, i.e. a line is drawn from the current position of the node to the destination slot, and the node is moved directly along the line toward the destination slot over a series of timesteps and is locked there.
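The third technique, direct parametric motion, can be sketched as linear interpolation from the node's current position to the destination slot over a series of timesteps, after which the node is held (locked) there. The step count below is an illustrative assumption.

```python
def parametric_path(src, dst, steps):
    """Positions along the straight line from src to dst, one per timestep,
    ending exactly at the destination slot."""
    (x0, y0), (x1, y1) = src, dst
    return [(x0 + (x1 - x0) * t / steps, y0 + (y1 - y0) * t / steps)
            for t in range(1, steps + 1)]

# Move a node from the origin to slot (4.0, 2.0) over four timesteps:
path = parametric_path((0.0, 0.0), (4.0, 2.0), steps=4)
```

The final entry coincides with the destination, at which point the node would be locked; gradual enactment differs in that the motion is force-driven rather than prescribed.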

[0334] In 1330 remaining unfixed elements are optionally enabled to relax according to new coordinates corresponding to the destination slot assignments most recently made in 1320. In some embodiments (such as various embodiments using instantaneous enactment) processing in 1330 is performed. In some embodiments (such as various embodiments using gradual enactment or direct parametric motion) processing in 1330 is skipped.

[0335] Fig. 14 illustrates an embodiment of an additional pass of detailed placement of a Q-block. Processing according to 1410, 1420, 1430, and 1440 is performed for each resource class in order according to the resource class prioritization determined in 1301 of Fig. 13. Each resource class is unfixed in turn to enable additional relaxation. In some usage scenarios a plurality of iterations of processing of all resource classes according to Fig. 14 is performed. Unfixing each resource class enables higher priority resource classes (i.e. classes processed ahead of other classes) to relax with respect to lower priority resource classes (i.e. classes processed behind other classes).

[0336] In at least some structured ASICs the supply of fundamental hardware resources is predetermined and fixed. Careful apportionment of netlist nodes into function-realization entities (forms) can help to improve the quality of the physical solution of the EDA flow. However, size and performance constraints cause the form selections of different nodes in the netlist to be coupled, resulting in an extremely complex and thus potentially expensive computational optimization problem. A procedural approach to generating a solution includes a technique making use of Integer Linear Programming (ILP). Illustrative embodiments for circuit placement are described.

[0337] A schema is used for representation of a circuit netlist when nodes of an initial (e.g. synthesis- or schematic-derived) gate-level netlist are interchangeable with functionally equivalent alternatives implemented using different hardware resources. Herein, each functionally equivalent realization is called a "form", and the initial gate-level netlist is called the form-level netlist. Exchanging a form instance in the form-level netlist with a functionally equivalent alternate form is herein called "morphing". Fig. 12A illustrates a form-level net of form-level nodes overlaid on a computational grid. Fig. 12B illustrates one type of view of an elaboration of the form-level net of Fig. 12A to include resource-level nodes in a resource-level net. Fig. 15A illustrates a form of the form-level net of Fig. 12A; in this view the resource-level nodes are shown internal to the form. Fig. 15B illustrates another form that uses different resources to implement the same function as the form of Fig. 15A. In at least one embodiment, the form of Fig. 15B is substituted for the form of Fig. 15A through a morphing process.

[0338] Fig. 15C illustrates a hierarchy of nodes, having hierarchical nodes, form-level nodes, and resource-level nodes. A top node (T) is the ancestor (parent, grandparent, great-grandparent, and so forth) of all nodes in the system. Nodes H1, H2, H3, H4, H5, H6 ... and HN are hierarchical nodes. A hierarchical node is any node with other nodes as children. For example, H4 is a child of H3, and H6 is a child of H4 (and a grandchild of H3). The hierarchical arrangement of nodes is based on the structure of a circuit description, such as one resulting from synthesis (e.g. a Verilog or VHDL circuit description), in some usage scenarios and/or embodiments. Alternatively, all or any portion of the circuit description is manually coded by designers. Nodes F1, F2, F3 ... and FN are form-level nodes representing instances of forms that are compatible with implementation in a structured array, according to a library specification. The top node, the hierarchical nodes, and the form-level nodes are associated with a form-level netlist.

[0339] Nodes R1 through R8 are resource-level nodes illustrated grouped under corresponding form-level parent nodes. The exact number and type of the resource-level nodes is determined unambiguously by the specification of each form type in the library, enabling computation (such as all or portions of placement and morphing) using the form-level netlist (instead of a resource-level netlist). An example of an "inflated" netlist is a netlist that has been augmented ("inflated") with resource-level nodes via a process of "inflation". An example of a "deflated" netlist is a netlist that has been stripped ("deflated") of resource-level nodes (such as those added via inflation) via a process of "deflation". Individual form-level nodes, one or more pluralities of them, or an entire netlist are inflated/deflated individually or in any combination, all at once or incrementally, according to various embodiments and/or usage scenarios.
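Inflation and deflation can be sketched as attaching resource-level children to form-level parents per the form's library specification, and stripping them again. The dict-based netlist and the AND2_A entry (a NAND2 plus an INV, per Fig. 15A) are illustrative assumptions about representation, not the patent's data format.

```python
# Hypothetical form library: each form type maps to its resource-level nodes.
FORM_LIB = {"AND2_A": ["NAND2", "INV"]}

def inflate(netlist, lib):
    """Add resource-level nodes as children of each form-level parent."""
    for node in netlist:
        node["children"] = list(lib[node["form"]])
    return netlist

def deflate(netlist):
    """Strip resource-level nodes added by inflation."""
    for node in netlist:
        node.pop("children", None)
    return netlist

nl = [{"name": "F1", "form": "AND2_A"}]
inflate(nl, FORM_LIB)
```

Because the children are fully determined by the form type, deflation loses no information: the form-level netlist alone suffices for placement and morphing computation.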

[0340] Fig. 15D illustrates selected nets connected between selected nodes of Fig. 15C. Two rows of entities are illustrated. The top row of entities (T, H1, H2 ... HN, F1, F2, F3, F4 ... FN) illustrates the nodes. For clarity of description, the hierarchical arrangement of the nodes is omitted from Fig. 15D (but is illustrated in Fig. 15C). The bottom row of entities (N1, N2 ... NN) illustrates nets of the netlist. Note that while a specific interconnection of nodes by nets is illustrated in the figure, in general nodes are connected to any number of other nodes (zero, one, two, or more). In the figure, each line between a node and a net represents a pin, as discussed elsewhere herein. In general, nodes have pins to a plurality of nets, and a net that connects to a form-level node does so at a distinct port. For example, in Fig. 15A, the illustrated form type AND2_A has three form ports: A, B, and Y.

[0341] Fig. 15E illustrates the nodes and nets of Fig. 15D after augmentation (such as via inflation or elaboration) with resource-level nodes. The top row of entities (T, H1, H2 ... HN, F1, R1, R2, F2, R3, R4, R5, F3 ... FN, and RN) illustrates the nodes. The bottom row of entities (N1, N2, N3 ... NN) illustrates nets of the netlist. Compared to the pre-inflation scenario illustrated in Fig. 15D, the same nets connect to the same form ports of the same form instances, but as illustrated in Fig. 15E, there are also connections to some resource-level nodes, with extra pins being determined from the form specification. For example, one of the form-level nodes (such as F1 of Fig. 15E) is of a specific form type (such as AND2_A of Fig. 15A). A net connects to a port of the form-level node (such as port A of F1 of Fig. 15E) and to a port of the specific form type's resource-level node (such as port A of NAND2 15,105 of Fig. 15A). The resource-level node is associated with the form-level parent node via inflation of the netlist.

[0342] Various operations are performed when inflating a netlist. As a first operation, resource-level nodes are added as children of respective form-level parent nodes. As a second operation, nets connecting to form-level node ports are extended down to resource-level node ports according to a form specification corresponding to the form-level node. As a third operation, zero or more new nets are created that span pins of resource-level nodes, and span no other elements.

[0343] As an example of operations that are performed when inflating a netlist, consider Fig. 15A. Selected nets inside of form specification bubble 15,100A are named. Name n1 names a connection from form port A 15,101A to NAND2 15,105 resource port A. Name n2 names a connection from form port B 15,102A to NAND2 resource port B. Name n4 names a connection from INV 15,106 resource port Y to form port Y 15,103A. Name n3 names a new connection (a new net) from NAND2 resource port Y to INV resource port A. Thus a new connection (as illustrated by n3) is added to a netlist whenever a form-level node of form type AND2_A is inflated, while other connections (as illustrated by n1, n2, and n4) result from adding pins to preexisting nets spanning form pins in a form-level netlist.

[0344] In various embodiments, synthesis is targeted to a library to produce a netlist of form-level elements. The form-level netlist is then elaborated with additional implementation detail (e.g. resource-level nodes and net connections) as determined by form specifications. Placement (such as global or detail placement) and morphing are performed on a form-level netlist (e.g. before elaboration or inflation), and alternatively on an inflated netlist. When morphing operations are performed on a form-level netlist, there is no change to net connectivity on form-level instances, thus providing enhanced computational efficiency in some usage scenarios.

[0345] In a structured ASIC, the supply of hardware resources is predetermined and fixed. The optimal selection of an implementation form for each node in the form-level netlist is a complex problem involving many coupled considerations. For example, certain hardware resources in a structured ASIC might be faster than others, but if all form-level nodes were morphed into forms that utilize the faster resource, then the total silicon area required to implement a circuit could be greater than otherwise necessary, thus increasing the cost of manufacture. A denser placement may be obtained if the form-level instances in the netlist are morphed amongst available forms so that aggregate demand for each resource type across all form instances in the netlist follows the same proportional relationship as the supply thereof in the structured ASIC architecture being used to implement the circuit. However, since in such an apportionment many form instances will be implemented using forms that require slower hardware resources, the circuit may perform more slowly overall. Careful apportionment of the forms among the nodes of the netlist to optimize overall performance of the circuit is important. Each change of a given form instance from one implementation form to another results in a change to the timing characteristics of all logic paths through the affected node, hence providing another coupling pathway in the form determination process. Similarly, if resource exhaustion forces a node to be implemented using a form such that the nearest available implementation resources are far from the ideal location of the node, then routability degradation may occur.

[0346] There are many uses of morphing in structured ASIC EDA. The following list of examples is provided for illustration only, and should not be taken as limiting.
[0347] As one illustrative example, consider the case of a netlist that is to be placed in a structured ASIC logic array instance. Knowledge of whether the netlist can be packed to fit into the available resource supply of the specified structured ASIC is desired. A simple tabulation of the resources demanded by the forms in the initial gate-level netlist can be performed and compared to the supply of resources in the structured ASIC logic array instance. Fig. 16A illustrates the supply and demand for resources R1 through R6 corresponding to target functions of an integrated circuit design having a first selection of forms for the target functions. For at least some of the resources, the demand exceeds the available supply. However, even if the demand for a resource exceeds supply in the structured ASIC logic array instance, a fit may still be possible. It may be possible to morph some or all of the nodes in the form-level netlist by exchanging selected form instances with functionally equivalent alternate forms, to relieve the over-demand for certain resources while increasing the demand for other underutilized resources. Fig. 16B illustrates the supply and demand for resources R1 through R6 for the same target functions as for Fig. 16A, but using a second selection of forms for the target functions obtained by morphing certain forms to use different resources. For each of the resources shown, the demand is less than or equal to the supply. In this way, a morphing operation can yield a determination of the feasibility of fitting a netlist into a structured ASIC logic array instance.
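The tabulate-and-compare feasibility check described above can be sketched as follows (an illustrative sketch in the spirit of Figs. 16A/16B; the resource names, the two-entry form library, and all quantities are hypothetical, not taken from the patent):

```python
SUPPLY = {"R1": 10, "R2": 10, "R3": 10, "R4": 10, "R5": 10, "R6": 10}

# Hypothetical library: each form maps to the quantity of each resource
# that one instance of the form consumes.
LIBRARY = {
    "XOR2_a": {"R1": 3},            # first selection: leans heavily on R1
    "XOR2_b": {"R2": 1, "R3": 2},   # functionally equivalent alternate form
}

def demand(netlist):
    """Tabulate aggregate resource demand for a list of (node, form) pairs."""
    total = {r: 0 for r in SUPPLY}
    for _node, form in netlist:
        for r, qty in LIBRARY[form].items():
            total[r] += qty
    return total

def fits(netlist):
    """True when demand for every resource is within the available supply."""
    d = demand(netlist)
    return all(d[r] <= SUPPLY[r] for r in SUPPLY)

# First selection of forms (Fig. 16A situation): demand for R1 exceeds supply.
first = [(i, "XOR2_a") for i in range(5)]
# Second selection (Fig. 16B situation): two nodes morphed to the alternate.
second = [(i, "XOR2_a") for i in range(3)] + [(i, "XOR2_b") for i in range(3, 5)]
```

Here `fits(first)` fails (15 units of R1 demanded against 10 supplied), while `fits(second)` succeeds after morphing relieves the over-demand.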

[0348] As another illustrative example, consider the case of a netlist that is to be placed into the smallest possible accepting logic array instance of the structured ASIC. In this situation the size of the structured ASIC is not predetermined, but is to be an output of the netlist packing optimization problem. Possible approaches include: A) A succession of structured ASIC logic array instances of different sizes are individually evaluated using the fit-checking procedure described in the preceding example. The smallest structured ASIC logic array instance that is large enough to hold the netlist is the result. B) Morph the form-level netlist until the stoichiometric ratios of the resources demanded by the forms match as nearly as possible the stoichiometric provisioning proportions in the structured ASIC. Then the ratio between the corresponding elements in the resource demand versus provisioning yields the required logic array size.
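Approach B above amounts to dividing aggregate demand by per-tile provisioning and letting the binding resource set the size; a minimal sketch, with hypothetical resource names and numbers:

```python
import math

# Hypothetical per-tile provisioning of a structured ASIC logic array, and
# aggregate post-morphing demand for the same resources.
SUPPLY_PER_TILE = {"NAND2": 8, "NOR2": 4, "INV": 16}
DEMAND = {"NAND2": 600, "NOR2": 290, "INV": 1100}

def min_tiles(demand, per_tile):
    """Smallest tile count such that every resource demand is covered;
    the maximum of the per-resource demand/provisioning ratios."""
    return max(math.ceil(demand[r] / per_tile[r]) for r in demand)

n = min_tiles(DEMAND, SUPPLY_PER_TILE)  # binding resource determines the size
```

With these numbers NAND2 is the binding resource (600/8 = 75 tiles), so `n` is 75.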

[0349] In yet another illustrative example, consider the case of the placement of a netlist within a specified structured ASIC logic array instance. In this case, in addition to determining if a netlist can fit, a complete final placement is sought, such that all resources consumed by forms of a form-level netlist are uniquely assigned to resource "slots" in the structured ASIC logic array instance. One approach is to divide the available area into abutting blocks, and then attempt to find a morphing solution that fits a respective portion of the netlist over each block into the respective resource complement of the respective block. As with the netlist fit-checking operation described above, there may be an initial imbalance between the resources demanded by the forms and the structured ASIC logic array supply in a given region that can be relieved through morphing. Only a subset of the nodes in the netlist participate in the morphing operation, and only a portion of the resources of the structured ASIC logic array instance are available for utilization. The block morphing operation is performed on the subset of the netlist that is contained within each of the blocks. The blocks need not be of uniform shape or size. Of course, embodiments such as domain decomposition and netlist subsection morphing are not the only approaches to placement generation. As long as the whole netlist is morphed to fit within the resources of the whole structured ASIC logic array instance, there will be some way that the resources of the form instances in the netlist could be assigned to resource slots.

[0350] As an additional illustrative example, consider the case of placement of a netlist into a dynamically sized structured ASIC logic array instance, where the final size of the logic array is determined simultaneously with generation of a legal placement. Such a facility might work by "spreading" the netlist until nodal density fell to a point where block-based morphing (as described above) was successful for all domains containing circuit elements. The size of the final fitting configuration determines the size of the structured ASIC logic array to be used for the netlist. This example is distinct from the minimum logic array size determination example above, in that the former represents a theoretical maximum packing density determination, where all the netlist form-level nodes participate in the morph, whereas in this case there are many independent morphing problems in which a reduced subset of the netlist nodes participates in the morphing operation. The size of the logic array instance that can be obtained in this way will in general be lower bounded by the former "theoretical maximum density" logic array size described in the earlier example. In general, the fewer the number of form-level instances that participate in a morphing operation, the less space-efficient the solution will be.
[0351] As an additional illustrative example, consider the case of a placement flow that aims to generate a placement of a netlist using iterative refinement of morphing regions. In this scenario, processing starts with a structured ASIC logic array instance size known to be big enough to hold a morphed version of the netlist (at least as big as the minimum theoretical size produced by the logic array size minimization example in the previous section). A morphing window is defined, initially to be the size of the full structured ASIC logic array instance. The netlist is globally placed within the window using any available global placement or partitioning technique, and morphing operations are attempted in subdomains (or subwindows) of the (previous) morphing window. The subwindows may be constructed by bisection of the enclosing window, or by any other suitable subdivision technique. When the global placement has evolved to the point that each subwindow is morphing soluble, the netlist nodes are constrained to stay within the subwindows, and the subwindows themselves are taken to define a reduced spatial domain for further global placement or partitioning refinement. In this way, the process proceeds by recursive subdivision of morphing windows, until some limiting cutoff criterion is reached. For example, the process might terminate when the morphing windows reach a size of 10 nanometers^2, or any other similar stopping criterion. Note in particular that the spatial resolution of the recursively refined morphing window grid is not required to be spatially uniform. Indeed, nonuniform spatial resolution refinement grids may be of special utility in situations with complex floorplans.
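The recursive-refinement loop above can be sketched as follows (a simplified, hypothetical skeleton: a one-dimensional window, a pluggable `morph_soluble` feasibility test, and the stopping size are placeholders, not the patent's implementation):

```python
def refine(window, nodes, morph_soluble, min_size):
    """Recursively bisect a 1-D morphing window until every leaf window
    is morphing soluble or the cutoff size is reached."""
    lo, hi = window
    if morph_soluble(window, nodes) or (hi - lo) <= min_size:
        return [(window, nodes)]
    mid = (lo + hi) / 2.0
    # Constrain each node to the subwindow it currently occupies.
    left = [x for x in nodes if x < mid]
    right = [x for x in nodes if x >= mid]
    return (refine((lo, mid), left, morph_soluble, min_size)
            + refine((mid, hi), right, morph_soluble, min_size))

# Toy usage: a window counts as "soluble" when it holds at most 2 positions.
leaves = refine((0.0, 8.0), [0.5, 1.5, 2.5, 6.0],
                lambda w, ns: len(ns) <= 2, min_size=1.0)
```

Each leaf pairs a window with the node subset constrained to it, mirroring the point at which the netlist nodes become pinned to their subwindows.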

Morphing Techniques

[0352] Now consider a detailed description of some specific techniques for implementing morphing according to various embodiments.

Morphing Techniques: Interchange Morpher

[0353] An illustrative interchange morphing (problem) solver uses three specification components:

[0354] 1) A library. The library is a statement of available forms, the function each form implements, and the quantity of each resource that is utilized by each form.

[0355] 2) Netlist nodes, each node of some particular initial form type. The netlist nodes may be a subset of the netlist.

[0356] 3) Capacity of resources provided by the structured ASIC. The capacity may be a subset of the total resources available for placement. In some usage scenarios the capacity is specified as an array of integers, indexed by an identifier of the resources in the structured ASIC logic array architecture.

[0357] Interchange morphing proceeds in stages, as follows:

[0358] 1) Assess initial demand for resources by accumulating demand for each resource type by the form of each participating node. In pseudo-code:

    for each resource r do:
        footprint(r) = 0
    for each node n do:
        f = n.form
        for each resource r do:
            footprint(r) = footprint(r) + library.resource_demand(f, r)

[0359] If footprint(r) <= capacity(r) for each r, then the nodes fit on entry and no additional morphing is required in order to achieve a fit. In some usage scenarios additional morphing may be desirable, since there are many factors of interest besides just placement feasibility.
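A runnable rendering of the pseudo-code above (the library contents, resource names, and capacities are hypothetical):

```python
def assess_demand(node_forms, library, resources):
    """Accumulate per-resource demand across the forms of all participating
    nodes; each node is represented here by its current form name."""
    footprint = {r: 0 for r in resources}
    for form in node_forms:
        for r in resources:
            footprint[r] += library.get(form, {}).get(r, 0)
    return footprint

# Hypothetical library: form -> per-resource demand of one instance.
LIB = {"AND2_1": {"NAND2": 1, "INV": 1}, "AND2_2": {"NOR2": 1, "INV": 2}}

fp = assess_demand(["AND2_1", "AND2_1", "AND2_2"], LIB, ["NAND2", "NOR2", "INV"])
capacity = {"NAND2": 2, "NOR2": 1, "INV": 4}
fits_on_entry = all(fp[r] <= capacity[r] for r in capacity)
```

With these numbers the nodes fit on entry, so no morphing would be required for feasibility alone.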

[0360] 2) Take forms without alternates. Depending on the specific construction details of the structured ASIC library, there may be forms with no alternates, i.e., functions with only one way to be implemented in the structured ASIC architecture (that is specified in the library). Forms without alternates will not be morphed, since there are no interchange possibilities, so the forms without alternates are taken as is. One way to do this is to remove the forms from the morphing participation set, and remove the resources consumed by the removed forms from the resource capacity vector. Alternatively, other bookkeeping strategies may be used.

[0361] 3) Register balancing. In some structured ASIC architecture configurations, the forms implementing sequential (register) functions are restricted, having much reduced morphability (fewer alternate implementation forms) compared to combinational forms. For example, there may be only one or two sequential resources (flip-flops) in the structured ASIC architecture, from which the sequential forms can be built. Often there is only a single sequential form per sequential resource type for the sequential functions. In contrast, it is not uncommon for combinational functions to have a dozen alternate implementation forms, with corresponding resource demand touching each non-inverter resource type. Because of the reduced implementation flexibility, it may be desirable to resolve sequential balancing next.

[0362] This can be done, for example, by the following procedure. Score sequential nodes according to respective footprints onto oversubscribed resources. Sort the nodes by the scores, so the higher scoring nodes are considered first for morphing into alternate forms. For each sequential node with a footprint onto an oversubscribed resource, score each respective alternate form according to an objective function, and select the best scoring form. If the selected form is different from the current form, then a morph is performed. After each morph, check to see if the sequential resources have been brought into alignment with the resource supply. If so, then exit the register balancing processing, and otherwise continue to the next node.

[0363] Aspects of certain objective functions will now be detailed. Other objective functions may also be used; thus these embodiments are merely illustrative and not limiting. For scoring sequential forms, in some usage scenarios it may be useful to accumulate 1 (one) for each combinational resource utilized, plus 10 times the number of any oversubscribed resources used by the form. Lower scores are thus preferable. For combinational forms, in some usage scenarios it may be useful to accumulate, for each resource 'a' utilized by the form, the quantity:

    double sa = (100. * cfpa * tfpa) / capacity_a *
                (tfpa > capacity[a] ? (100. * tfpa / capacity_a) : 1.);

where cfpa is the form footprint onto resource 'a', tfpa is the total footprint onto resource 'a' if the form were to be chosen, capacity[a] is the available supply for resource 'a' in the current morphing context, and capacity_a is the same as capacity[a], unless capacity[a] equals zero, in which case capacity_a is .01 (to avoid division by zero). The formula has the property of heavily costing demand for oversubscribed resources, and of accentuating the cost of using forms with a footprint onto resources that are provided in smaller proportions by the structured ASIC architecture. In some embodiments alternate mathematical formulas provide similar behavior.
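The per-resource penalty term above can be transcribed directly (a sketch; the dictionary-based interfaces and the example quantities are illustrative, not from the patent):

```python
def combinational_form_score(form_footprint, total_footprint, capacity):
    """Sum the per-resource penalty term for one candidate form.
    Dicts map resource name -> quantity; lower scores are preferable."""
    score = 0.0
    for a, cfpa in form_footprint.items():
        if cfpa == 0:
            continue
        tfpa = total_footprint[a]
        cap = capacity[a] if capacity[a] != 0 else 0.01  # avoid divide-by-zero
        sa = (100.0 * cfpa * tfpa) / cap
        if tfpa > capacity[a]:            # oversubscribed: multiply penalty
            sa *= 100.0 * tfpa / cap
        score += sa
    return score
```

For example, a form touching one NAND2 with a prospective total footprint of 5 against a capacity of 10 scores 50.0, while the same demand against a capacity of 4 (oversubscribed) scores 15625.0, showing how sharply the formula penalizes oversubscription.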

[0364] 4) Morph combinational nodes. Similar to register balancing, remaining as yet unmorphed non-sequential (e.g. combinational) nodes that have a footprint onto an oversubscribed resource are identified. The alternate forms are scored according to the objective function, and the best (lowest cost) morph selected.

[0365] In some usage scenarios the combinational node morphing results in a collection of nodes that have been morphed to fit within a resource supply of a specified problem context. In some usage scenarios the combinational node morphing results are insufficient, and the following additional procedures may be optionally invoked.

[0366] 5) A morph away from an oversubscribed resource may be blocked because alternate forms all have a footprint onto some resource that will become oversubscribed if the morph is taken. Thus ways to "make room" for forms that will be coming out of oversubscribed resources and impinging upon different resources than in a starting configuration are searched for. One technique is to "extract" inverter forms. Since the inverter function can be implemented with essentially any (inverting) combinational resource, there is really no danger of an inverter being unable to be reinserted, if there is room. The technique comprises extracting inverters, scoring forms with a footprint onto oversubscribed resources using the objective function, and then taking the best scoring alternate form. Finally, the inverters (the forms implementing the inverter function) are added back in, morphing as necessary to attempt to achieve a fit.
[0367] In some usage scenarios 5) is run after procedures 1) through 4), although this is not required.

[0368] 6) Building on 5), morphing may be inhibited whenever a destination resource is fully occupied. Thus in addition to extracting the inverters, any forms that impinge on almost-full resources are also extracted. The extracting opens up additional space so that when iterating through the forms impinging on oversubscribed resources, there is more room in resources that previously appeared full. Then the full set of removed nodes is reinserted, morphing as needed.
[0369] In some usage scenarios 6) is run after 5), but this is not required.

Morphing Techniques: Integer Linear Programming Based Morphing

[0370] Some morphing embodiments use integer linear programming. A linear program is constructed comprising a system of equations and constraints specified over a set of state variables representing the number of forms of each form-type. The formulation includes:
1) Function instance conservation constraint equations
2) Resource capacity constraints
3) An objective function

[0371] The independent system variables are taken to be the number of each form to be utilized. The system variables are constrained to be non-negative integers. The count of instances of a given form type cannot be either fractional (a given netlist node is implemented exactly and entirely using one specific form in any valid morph state) or negative.

[0372] Once the constraint equations and the objective function are specified, the ILP solver returns with the number of each form to be utilized, which optimizes the objective function and satisfies the constraints. Of course, it is possible that no solution exists, if, for example, the number of form instances assigned to a region is so great that the forms cannot be packed in, or if there is inadequate morphability in any of the functions. If there is no solution, then the ILP solver returns a status indicating that no solution could be found.

[0373] The function instance conservation constraint equations state that the result will have the same number of instances of each function type as were in the original configuration of the subset of the netlist participating in the morph. Stated another way, the intent of morphing is to select alternate forms implementing the same circuit function, so the action of the morpher on a set of nodes should preserve the number of instances implementing each function. Within a function, the distribution of nodes implemented in different forms can change, but the total number of nodes in all the forms implementing the function is the same in the output as in the input. Morphing per se does not change the Boolean algebraic structure of the form-level netlist. (Other optimization technologies unrelated to morphing do that, and use of morphing does not preclude use of the other technologies.)

[0374] For example, suppose that the number of form instances implementing the NAND2 function is 5, apportioned on input as 3 form instances using form NAND2_1 and 2 using form NAND2_2, and that the number of form instances implementing a MUX4 function is 7, apportioned as 3 form instances using MUX4_1, 2 using MUX4_2 and 2 using MUX4_3. Further assume that the state variables x_0, x_1, x_2, x_3, x_4 represent the number of form instances of the forms NAND2_1, NAND2_2, MUX4_1, MUX4_2 and MUX4_3 respectively. Then the following two constraint equations would be among the set of function instance conservation equations:

    1*x_0 + 1*x_1 + 0*x_2 + 0*x_3 + 0*x_4 + 0*x_5 + ... = 5
    0*x_0 + 0*x_1 + 1*x_2 + 1*x_3 + 1*x_4 + 0*x_5 + ... = 7

[0375] The resource capacity constraints are inequalities that state that the resources utilized by a given form allocation may not exceed the resources that are available. There is one respective constraint inequality for each resource in the structured ASIC architecture. In the respective inequality constraint for each resource, the coefficient of each state variable is the number of that resource consumed by the corresponding form. The right-hand side is the capacity of that resource in the current region context.

[0376] For example, consider a morphing problem for a structured ASIC architecture containing NAND2, NOR2 and INV resources (among others). There are forms INV_INV, INV_ND2 and INV_NR2 implementing an inverter function, each using one of the INV, NAND2 and NOR2 resources respectively. There is a form XNOR2_1 implementing an XNOR2 function using three NAND2 resources and one NOR2 resource. There is a form XNOR2_2 implementing an XNOR2 function using two NAND2 and two NOR2 resources. In the current region there are 400 INV, 100 NAND2, and 150 NOR2 resources. Then the resource capacity constraints would include terms like these:

    1*x_0 + 1*x_1 + 1*x_2 + 0*x_3 + 0*x_4 + ... <= 400
    0*x_0 + 0*x_1 + 0*x_2 + 3*x_3 + 2*x_4 + ... <= 100
    0*x_0 + 0*x_1 + 0*x_2 + 1*x_3 + 2*x_4 + ... <= 150

where x_0 represents the number of INV_INV forms, x_1 the number of INV_ND2 forms, x_2 the number of INV_NR2 forms, x_3 the number of XNOR2_1 forms and x_4 the number of XNOR2_2 forms.
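A tiny instance of this formulation can be checked exhaustively (exhaustive search stands in for a real ILP solver here; the capacities and the five-instance XNOR2 conservation requirement are hypothetical):

```python
from itertools import product

# Two interchangeable forms implement the XNOR2 function, as in [0376].
DEMAND = {"XNOR2_1": {"NAND2": 3, "NOR2": 1},
          "XNOR2_2": {"NAND2": 2, "NOR2": 2}}
CAPACITY = {"NAND2": 12, "NOR2": 8}
N_XNOR2 = 5  # function instance conservation: x_3 + x_4 == 5

def feasible(x3, x4):
    """Check conservation and every resource capacity inequality."""
    if x3 + x4 != N_XNOR2:
        return False
    for r in CAPACITY:
        used = x3 * DEMAND["XNOR2_1"][r] + x4 * DEMAND["XNOR2_2"][r]
        if used > CAPACITY[r]:
            return False
    return True

solutions = [(x3, x4) for x3, x4 in product(range(N_XNOR2 + 1), repeat=2)
             if feasible(x3, x4)]
```

On this instance the constraints pin down a unique apportionment, `(x3, x4) = (2, 3)`: any more XNOR2_1 forms oversubscribe NAND2, any fewer oversubscribe NOR2.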

[0377] Some structured ASIC architectures have resources that can be reconfigured to switch between different primitive Boolean functions. For example, in some structured ASIC architectures, a mask reconfiguration might allow an abstract resource to be switched between implementing either a NAND2 function or a NOR2 function. Morphing support for such architectures can be accommodated in variations of the integer linear programming formulation by including combination constraint inequalities to constrain the sum of forms implemented using the reconfigurable resources to be no larger than the total possible. For example, posit a structured ASIC architecture such that within a given region there are 100 NAND2 resources, 100 NOR2 resources, and 100 NAND2/NOR2 combinational resources. Label the NAND2 resource 0, the NOR2 resource 1, and the NAND2/NOR2 combinational resource 2. Further, represent the footprint of form i onto resource j as R_ij and the supply of resource i as S_i. Then the constraint inequalities would include terms like:

    R_00*x_0 + R_10*x_1 + R_20*x_2 + ... <= S_0 + S_2
    R_01*x_0 + R_11*x_1 + R_21*x_2 + ... <= S_1 + S_2
    (R_00+R_01)*x_0 + (R_10+R_11)*x_1 + (R_20+R_21)*x_2 + ... <= S_0 + S_1 + S_2

[0378] The above formulation enables exploration of solutions where the combinational resources are allocated flexibly between either resource behavior, but simultaneously excludes solutions that oversubscribe the simple plus combinational resource supply.

Morphing Techniques: Objective Function

[0379] In some usage scenarios an ILP solver package allows a user to specify an objective function of the system variables to optimize, as there may be many solution vectors that satisfy the various constraint equations. Without the ILP solver, the best choice of the many available solutions may not be apparent. An objective function is a function specified as a linear combination of the system state variables. The ILP solver then returns the best solution found, as measured by the objective function. That is, of the range of solutions satisfying the constraint equations, the chosen solution will be the one that maximizes the objective function:

    F = sum_i O_i * x_i

where i ranges over the number of variables in the system, x_i is the i-th system variable, and O_i is the coefficient to be applied to the i-th system variable. More specifically, 0 <= i < N_forms, where N_forms is the number of forms in the library and x_i is the number of the corresponding form in the solution.

[0380] One particularly useful objective function to use is a so-called "form efficiency". The form efficiency measures the efficiency of implementation of each form in terms of the respective Boolean computational work that the respective form performs, divided by a similar measure of the Boolean computational work that could be performed using the resources consumed implementing the respective form. In some usage scenarios the efficiency of a form varies between 0 and 1, although the normalization is immaterial to the optimization problem.
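As a hedged illustration of how such a ratio could be computed (the patent does not specify the work measure; NAND2-equivalent counts are assumed here purely for the sketch, and all values are hypothetical):

```python
# Hypothetical work measure: NAND2-equivalents performed by each function,
# and NAND2-equivalents achievable per resource consumed.
WORK_OF_FUNCTION = {"XNOR2": 4.0}            # XNOR2 assumed ~4 NAND2-equivalents
WORK_OF_RESOURCE = {"NAND2": 1.0, "NOR2": 1.0, "INV": 0.5}

def form_efficiency(function, resource_footprint):
    """Work performed by the form divided by the work its consumed resources
    could perform; 1.0 means no resource capability is wasted."""
    performed = WORK_OF_FUNCTION[function]
    possible = sum(WORK_OF_RESOURCE[r] * n for r, n in resource_footprint.items())
    return performed / possible

eff = form_efficiency("XNOR2", {"NAND2": 3, "NOR2": 1})
```

Under these assumed weights the three-NAND2/one-NOR2 form is fully efficient (`eff` is 1.0); a form consuming more resources for the same function would score lower.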

[0381] Other embodiments use optimization objectives other than form efficiency.

Morphing Techniques: Software Implementation

[0382] An illustrative usage scenario of form morphing follows.

[0383] The structured ASIC logic array is divided into regions, and a global placer apportions circuit nodes to the various regions. A morphing controller function then cycles through the regions, identifies respective resource capacities and respective netlist nodes contained within each region, and calls the morpher, passing in the resource capacities, the nodes (with the current form assignments), possibly a choice of objective function, possibly also an indication of the priority of the nodes, and possibly also a function for evaluating the suitability of any given form for any given node.

[0384] The morpher evaluates the number of nodes implementing each function present in the set of participating nodes as respective function instance counts according to a library. The function instance counts, along with the resource capacities, are used to formulate the system of equations and inequality constraints, as described above. The coefficients of the objective function are supplied, and the ILP solver is invoked.

[0385] If a solution is found, then the resulting quota of forms (i.e., a particular distribution of form types determined by the ILP solver) is apportioned to the participating nodes in some manner. One illustrative example technique is to pass through the nodes, and test to see if the full quota of the respective current form has been reached yet. If not, take the form, and move to the next node. If so, morph this node to the next not-yet-exceeded form type within its function group.
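The pass-through apportionment described above can be sketched as follows (an illustrative sketch; node names, form names, and quotas are hypothetical, and a full implementation would also handle the case where a function group is exhausted):

```python
def apportion(nodes, quota, function_group):
    """Assign each node a form, honoring the per-form quotas from the ILP
    result. nodes: list of (node_id, current_form); quota: form -> count;
    function_group: form -> interchangeable forms for the same function."""
    used = {f: 0 for f in quota}
    assignment = {}
    for node_id, form in nodes:
        if used[form] < quota[form]:
            chosen = form                      # current form still under quota
        else:                                  # morph to next available form
            chosen = next(f for f in function_group[form]
                          if used[f] < quota[f])
        used[chosen] += 1
        assignment[node_id] = chosen
    return assignment

GROUP = {"NAND2_1": ["NAND2_1", "NAND2_2"],
         "NAND2_2": ["NAND2_1", "NAND2_2"]}
QUOTA = {"NAND2_1": 1, "NAND2_2": 2}
out = apportion([("a", "NAND2_1"), ("b", "NAND2_1"), ("c", "NAND2_1")],
                QUOTA, GROUP)
```

Here node "a" keeps its form, while "b" and "c" are morphed to NAND2_2 once the NAND2_1 quota is consumed.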

[0386] An additional illustrative, but not limiting, example technique for apportioning forms is as follows. Order input nodes according to a priority indicator supplied by a caller. Assign each node to a "preferred" form type (for example, whatever form type the node was assigned by the tool (e.g. a timing-driven synthesis tool) that produced the original form-level structural netlist), if available. If unavailable, then assign to one of the other forms in the function group (e.g. a lower or higher drive strength logically equivalent form).

[0387] An additional illustrative, but not limiting, example technique for apportioning forms is as follows. When a preferred form quota for a node is exhausted, then instead of assigning the node, push the node back onto a queue for subsequent consideration. After all nodes have been visited once, and either assigned or queued, the queue of blocked nodes is reprocessed. Each node of the queue is assigned any of the available alternate forms in a corresponding function group.
[0388] An additional illustrative, but not limiting, example technique for apportioning forms is as follows. Use the supplied evaluator function to evaluate the form ranking on a per-node basis, thus enabling factors outside the scope of the ILP formulation to affect determination of the apportionment of the quota of forms developed by the ILP-based morpher. In other words, the morpher is responsible for determining a fitting set of form quotas, but other systems or techniques are responsible for apportioning available forms based on more detailed per-node considerations. For example, timing critical path nodes may receive special treatment.
[0389] As a specific illustrative, but not limiting, example technique, the externally supplied evaluator function returns a measure of the timing criticality of each node, enabling the order of visitation for form assignment to be in order of timing criticality of the nodes. As nodes are visited in timing criticality order, each is assigned a respective preferred form, if available. The preferred form is determined, for instance, by the timing driven synthesis tool that produced the original structural netlist. If the preferred form for a given node is no longer available (e.g., because the quota for that form determined by the ILP solver has been fully depleted), then the node is either assigned an available alternate form at that time, or is queued for subsequent form assignment after the full node list has been visited once for preferred-form assignment disposition.

[0390] In some usage scenarios and/or embodiments, a problem that is sometimes encountered with the aforementioned approach to timing driven morphing and form assignment is that the objective function used in the ILP morpher is unable to adequately discriminate between alternate solutions with respect to timing performance of the resulting circuit. The inability arises because the objective function is strictly a linear combination of system variables that are quantities of each form-type in a solution set, whereas the timing criticality is ordinarily a complex and potentially nonlinear function of various different variables. Thus, in some circumstances there is no way to directly model the timing behavior of the circuit in the ILP objective function. As a consequence, the optimized solution vector returns a reduced quota for some particular form type that is in relatively high demand by timing critical nodes. Thus achievable timing performance of the circuit is curtailed due to inadequate provisioning of desirable forms, even though adequate provisioning is possible given the resource capacities of a morphing context.

[0391] A first technique to address incidents of inadequate provisioning employs critical path preservation. Nodes with pins with negative timing slack are prioritized according to criticality (e.g. magnitude of negative timing slack), and some selectable percentage of the nodes are selected and initially granted respective preferred form assignments. The selected nodes are then subtracted out of the node set of the morphing problem, and the resources consumed by the granted preferred form assignments are subtracted from the resource capacities of the morphing problem. The ILP solver then tries to determine a fitting solution for the remaining nodes from the remaining resource supply. If the solver fails, then the percentage of critical path nodes marked for preservation is reduced, or alternatively the global placement is directed to further spread the netlist to accommodate the resource needs of the nodes on the critical paths.
[0392] A second technique to address inadequate provisioning incidents employs headroom analysis with incremental tracking. Nodes are granted respective preferred forms, exceeding the form quota returned by the ILP solver if necessary. The over-quota granting is accounted for by tracking the resource utilization and comparing the tracked resource utilization to the resource capacities in the morphing region. For example, after the ILP solver returns a result, the resource requirements of the resulting form quota solution vector are tabulated. Then, as nodes are visited for form assignment, if the preferred form of a node is unavailable due to form quota exhaustion, the node is still granted the preferred form as long as headroom remains between the resource requirements tabulation of the ILP form quota result vector and the resource capacity vector of the morphing region. Thus a form of "over-allocation form assignment" results. A mechanism to determine if headroom remains is to incrementally track the resource requirements vector (the resource footprint of the nodes in this morphing problem) and compare the resource requirements vector to the resource capacities. As long as there is headroom, the over-allocation of form assignments is accommodated. The incremental tracking adds in the footprint of a chosen form, and subtracts out the footprint of a nominated alternate, thus determining the net change to resource utilization as a result of the over-allocation form assignment. As long as the capacity of each resource in the morphing region is not exceeded by the cumulative form assignments, then the placement is feasible.
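The incremental add-in/subtract-out check above can be sketched as follows (a hypothetical sketch; the form names, footprints, and capacities are illustrative only):

```python
def try_over_quota_grant(preferred, alternate, demand, usage, capacity):
    """Grant `preferred` over quota if swapping it for `alternate` keeps every
    resource within the region's capacity; on success update `usage` in place.
    demand: form -> {resource: qty}; usage/capacity: {resource: qty}."""
    for r in capacity:  # first pass: verify headroom for the net change
        delta = demand[preferred].get(r, 0) - demand[alternate].get(r, 0)
        if usage.get(r, 0) + delta > capacity[r]:
            return False                     # no headroom for this resource
    for r in capacity:  # second pass: commit the net change
        usage[r] = (usage.get(r, 0)
                    + demand[preferred].get(r, 0) - demand[alternate].get(r, 0))
    return True

# Demo: preferred form costs one extra NAND2 over the quota'd alternate.
DEMAND = {"BUF_X4": {"NAND2": 2}, "BUF_X1": {"NAND2": 1}}
usage = {"NAND2": 9}
CAPACITY = {"NAND2": 10}
granted = try_over_quota_grant("BUF_X4", "BUF_X1", DEMAND, usage, CAPACITY)
```

The first over-quota grant succeeds (utilization rises to exactly the capacity); a second identical request would be refused because the headroom is gone.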

Timing Driven Force Computation

[0393] Timing driven SDI-based placement uses timing forces to systematically influence and optimize timing performance of a placement of elements such as in a design for an integrated circuit. In some embodiments timing characteristics of a circuit are modeled in a timing graph from a time-evolving placement, and timing forces are applied by a placement engine as a feedback mechanism. A timing graph may be a Directed Acyclic Graph (DAG) that has nodes that represent pins of a circuit netlist (e.g. provided by a user of the engine and an associated design flow) and edges that represent timing arcs within a library cell and interconnections of the netlist. The timing forces are applied in conjunction with net connectivity forces and spreading forces to improve placement quality as measured by circuit timing performance and routability.
1"5 16 [0394] One approach for modeling timing force for use in.a timing driven SDI-based 17 placement flow is known as a Path-Based Timing Force (PBTF) model. PBTF
heuristics apply 18 proportionate timing forces on each node (or element) of various critical paths, so that when 19 spreading forces are applied according to each critical path, the elements are pushed away or held together based on respective contribution to overall circuit performance.

[0395] In various embodiments of a PBTF system, any combination of factors may be used in determining timing force on an element. The factors include:
Critical Paths influence Factor (CPF);
Drive Resistance Factor (DRF); and
Stage Delay Factor (SDF).

Critical Paths influence Factor (CPF)

[0396] CPF models contributions of a node to all or any portion of critical paths of a circuit. In various embodiments of a PBTF model usage scenario a timing driven placement seeks to improve any combination of the Worst Negative Slack (WNS) and the Total Negative Slack (TNS) of the circuit. Contributions of a node to the critical paths of the circuit are accounted for to improve the TNS of the circuit.

[0397] Fig. 17A illustrates an example circuit with a plurality of critical paths. The critical paths include:
Path 1, P1 = {N0, N2, N3};
Path 2, P2 = {N0, N2, N4};
Path 3, P3 = {N1, N2, N3}; and
Path 4, P4 = {N1, N2, N4}.

[0398] Node N2 is common to all the paths, while all the other nodes are present in two of the four paths. Thus in some embodiments a CPF computation for node N2 will be higher than CPF computations for the other nodes. In some usage scenarios all critical paths of the circuit are explicitly enumerated. In some usage scenarios not all critical paths of the circuit are explicitly enumerated, since there are an exponential number of timing paths, and CPF modeling builds a heuristic-based CPF model for each node of a timing graph.
[0399] A CPF score is computed by topologically traversing nodes of the timing graph in forward Depth-First-Search (DFS) order and reverse DFS order. Two scores are computed for each node: transitive FanIn CPF (FICPF) and transitive FanOut CPF (FOCPF). The respective CPF score of each node is the product of FICPF and FOCPF.

[0400] FICPF is computed during the forward DFS traversal as a sum of the FICPFs of all immediate predecessor nodes of a node, a predecessor contributing only if the respective predecessor node is a critical node:
node_FICPF = Sum( critical_fanin_FICPF ).

[0401] Similarly, during the reverse DFS traversal, the FOCPF of each timing graph node is computed as a sum of the FOCPFs of all immediate successor nodes, a successor contributing only if the respective successor node is a critical node:
node_FOCPF = Sum( critical_fanout_FOCPF ).

[0402] Then each node CPF score is computed by multiplying the respective FICPF and the respective FOCPF:
node_CPF_score = node_FICPF * node_FOCPF.

[0403] CPF is then normalized by dividing the CPF score by the maximum CPF score of the timing graph:
normalized_node_CPF = ( node_CPF_score ) / Max( node_CPF_score ). (Eq. 1)

[0404] Fig. 17B illustrates example computations relating to an embodiment of CPF scoring. Tuples in the figure represent (FICPF, FOCPF) pairs, and underlined numbers represent the slack on each node.
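The CPF scoring of the preceding paragraphs can be sketched in Python. This is an illustrative sketch, not the claimed implementation: the graph representation (predecessor/successor maps, an explicit topological order, and a set of critical nodes) and the seeding of nodes lacking critical fanin or fanout at a value of 1 are assumptions.

```python
def compute_cpf(nodes, preds, succs, critical, topo_order):
    """Normalized CPF per Eq. 1: a forward pass accumulates FICPF from
    critical predecessors, a reverse pass accumulates FOCPF from
    critical successors, and each node's score is FICPF * FOCPF."""
    ficpf, focpf = {}, {}
    for n in topo_order:  # forward traversal
        s = sum(ficpf[p] for p in preds.get(n, ()) if p in critical)
        ficpf[n] = s if s > 0 else 1  # seed value of 1 is an assumption
    for n in reversed(topo_order):  # reverse traversal
        s = sum(focpf[q] for q in succs.get(n, ()) if q in critical)
        focpf[n] = s if s > 0 else 1
    score = {n: ficpf[n] * focpf[n] for n in nodes}
    top = max(score.values())
    return {n: s / top for n, s in score.items()}
```

Applied to the graph of Fig. 17A (N0 and N1 each feeding N2, which feeds N3 and N4, with all nodes critical), node N2 normalizes to 1.0 and the remaining nodes to 0.5, consistent with the observation in paragraph [0398].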

Drive Resistance Factor (DRF)

[0405] DRF models contributions of each node on a critical path based on the drive resistances of node drivers. In some usage scenarios the drive resistance of a node driver is a significant delay contributor to overall path timing. In one modeling equation that considers first-order effects, the stage delay of a gate is computed as follows:
gate_delay = Ti + Rd * Cl (Eq. 2)
where
Ti: intrinsic delay of the gate;
Rd: drive resistance of the gate; and
Cl: interconnect capacitance + pin capacitances (i.e. total capacitive load on the output of a gate).

[0406] In some embodiments pin capacitances are fixed (or unchanged) during timing driven placement, and thus the timing driven force model is directed to influence interconnect capacitance. According to Eq. 2, improving the product of drive resistance and total output load tends to improve the stage delay of a critical path node. The product may be improved by arranging for drivers with relatively higher drive resistance (Rd) to drive relatively lower capacitive loads, resulting in drivers having relatively low drive resistance (such as some drivers on critical paths) driving higher capacitive loads (such as relatively long wires). In some usage scenarios the incremental delay cost associated with driving a "stretched" wire with a strong driver is less than with a weak driver.

[0407] Fig. 18 illustrates an embodiment of a cascade of buffers of increasing drive strength (i.e. decreasing drive resistance). Five levels of buffer are illustrated with relative drive strengths of x1, x2, x4, x8, and x16 (i.e. each stage provides a factor of two more drive than the preceding stage). Nodes driven by the buffers are illustrated respectively as N1, N2, N3, N4, and N5.

[0408] Overall delay of the path illustrated in Fig. 18 is minimized if all the logic levels have equal delay. Ignoring intrinsic gate delays, the delay for each element of the path is balanced by equalizing the respective products of Rd * Cl. Since
Rd(x1) > Rd(x2) > Rd(x4) > Rd(x8) > Rd(x16)
the PBTF system attempts to maintain the following relative capacitive loading ordering:
Cl(x1) < Cl(x2) < Cl(x4) < Cl(x8) < Cl(x16).
Since Cl is directly proportional to wire length, and higher timing force tends to result in shorter wire lengths, timing forces are made proportionate to drive resistance.
[0409] Relative DRF is normalized by dividing the respective DRF weight of each node by the DRF weight of the node having the least drive resistance:
node_DRF = ( node_DRF_weight ) / Min( node_DRF_weights of all nodes ) (Eq. 3)
where
node_DRF_weight = drive resistance of the driver gate for the node under consideration.
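Eq. 3 can be sketched as follows. The per-node drive resistance values, including those used below for the Fig. 18 buffer cascade, are illustrative assumptions.

```python
def normalized_drf(drive_resistance):
    """Eq. 3: each node's DRF weight (the drive resistance of its
    driver) divided by the minimum weight over all nodes, so the node
    with the strongest driver (least resistance) normalizes to 1 and
    weakly driven nodes receive proportionately larger timing forces."""
    floor = min(drive_resistance.values())
    return {n: w / floor for n, w in drive_resistance.items()}

# Fig. 18 cascade: each stage doubles drive strength, i.e. halves
# drive resistance (the resistance units are illustrative).
drf = normalized_drf({'N1': 16, 'N2': 8, 'N3': 4, 'N4': 2, 'N5': 1})
```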

Stage Delay Factor (SDF)

[0410] The Stage Delay Factor (SDF) models stage delay contributions of each driver on a critical net (or a net on a critical path) and accounts for the maximum path length of each load pin on the critical net. The SDF combines stage delay and maximum path length factors to assign an SDF force component to each load pin. An SDF force is proportional to the maximum path length associated with the load pin.

[0411] The SDF is computed as follows:
SDF_Factor = dcoeff * exp( lpwpd / min_cycle - 1 ) (Eq. 4)
where
lpwpd = load pin worst path delay;
min_cycle = period of the clock controlling the net; and
dcoeff = driver stage delay coefficient.

[0412] The dcoeff is computed as follows:
dcoeff = ( dgsd / dpwpd ) * path_levels
where
dgsd = stage delay of the driver gate;
dpwpd = driver pin worst path delay; and
path_levels = number of logic levels in the path.

[0413] Load pin worst path delay is computed as follows:
lpwpd = AT( load_pin ) + clock_cycle - RT( load_pin ).
Driver pin worst path delay is computed as follows:
dpwpd = AT( driver_pin ) + clock_cycle - RT( driver_pin )
where
AT: arrival time; and
RT: required time.

[0414] Fig. 19 illustrates example computations relating to an embodiment of SDF calculation. In the figure:
lpwpd(L1) = 12;
lpwpd(L2) = 11;
lpwpd(L3) = 7;
dpwpd = 12;
clock_cycle = 10;
dgsd = 1;
SDF(L1) = dcoeff * exp(12/10 - 1);
SDF(L2) = dcoeff * exp(11/10 - 1); and
SDF(L3) = 0.
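Eq. 4 and the dcoeff formula can be sketched together as follows. The zero-force treatment of loads whose worst path delay does not exceed the clock period is inferred from the SDF(L3) = 0 entry above, and the path_levels value used in the example is an assumed input (Fig. 19 does not give one).

```python
import math

def sdf_force(lpwpd, dpwpd, dgsd, path_levels, min_cycle):
    """Eq. 4 for one load pin. Loads whose worst path delay does not
    exceed the clock period receive zero force (an assumption drawn
    from the SDF(L3) = 0 entry of Fig. 19)."""
    if lpwpd <= min_cycle:
        return 0.0
    dcoeff = (dgsd / dpwpd) * path_levels  # driver stage delay coefficient
    return dcoeff * math.exp(lpwpd / min_cycle - 1)
```

With the Fig. 19 values (dgsd = 1, dpwpd = 12, clock period 10) and an assumed path_levels of 3, SDF(L1) exceeds SDF(L2) due to the exponential term, and SDF(L3) is zero, consistent with the figure.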

[0415] The stage delay of a driver gate is the sum of the driver gate delay and the delay of the interconnect wire driven by the driver. The driver gate stage delay discriminates among loads based on criticality by factoring in the worst path delay of each load pin.

[0416] If a load pin is part of a slower critical path, then a higher force coefficient is associated with the load pin than with a load pin that is part of a relatively faster critical path. The exponential term provides discrimination between two critical paths of unequal lengths. For example, if a first critical path is missing a target by 2 ns while a second critical path is missing the target by 1 ns, then a higher multiplying factor is associated with the first path (compared to the second path) due to the exponential term. Thus critical paths with worse violations are weighted more.

Bounding Box Based Pin Force

[0417] In some embodiments timing forces are not applied in association with non-critical loads that fan out from a critical driver, thus enabling some relaxation of the non-critical loads so that more critical load pins of a net may be pulled closer to the driver. In some embodiments timing forces are nevertheless applied for non-critical pins if the pins form any portion of a bounding box of a critical net. A bounding box is defined as a rectangle around all the pins of a net. If a non-critical pin is on the edge of the bounding box, then an attractive force is applied to the load pin, thus in some cases reducing total interconnect capacitance (or at least preventing an increase in capacitance).
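Identifying the pins that lie on a net bounding box can be sketched as follows; the coordinate representation of pins is an illustrative assumption.

```python
def pins_on_bbox(pins):
    """Return the names of pins lying on the edge of the net bounding
    box, i.e. the rectangle spanned by all pin coordinates. Per the
    text, even non-critical pins in this set receive an attractive
    force. pins: dict mapping pin name -> (x, y)."""
    xs = [x for x, _ in pins.values()]
    ys = [y for _, y in pins.values()]
    return {name for name, (x, y) in pins.items()
            if x in (min(xs), max(xs)) or y in (min(ys), max(ys))}
```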

Path Based Timing Force

[0418] A first variant of a path-based timing force is:
PBTF1 = CPF * DRF + SDF
where
CPF: normalized_node_CPF (as in Eq. 1);
DRF: normalized_node_DRF (as in Eq. 3); and
SDF: normalized_node_SDF (as in Eq. 4).

[0419] A second variant of a path-based timing force is:
PBTF2 = CPF * DRF + RSF
where
CPF: normalized_node_CPF (as in Eq. 1);
DRF: normalized_node_DRF (as in Eq. 3);
RSF: normalized_node_RSF; and
normalized_node_RSF = node_slack / minimum slack of the timing graph.
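The two variants can be sketched directly; the functions below assume the normalized CPF, DRF, and SDF values computed per Eqs. 1, 3, and 4 are already available.

```python
def pbtf1(cpf, drf, sdf):
    """First variant: PBTF1 = CPF * DRF + SDF, with each factor
    normalized as in Eqs. 1, 3, and 4."""
    return cpf * drf + sdf

def pbtf2(cpf, drf, node_slack, min_graph_slack):
    """Second variant: PBTF2 = CPF * DRF + RSF, where RSF is the node
    slack divided by the minimum (most negative) slack of the graph."""
    return cpf * drf + node_slack / min_graph_slack
```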

RELATIVE-SLACK-BASED TIMING FORCE EMBODIMENTS

[0420] The SDI technique of optimizing chip placement relies on a variety of forces affecting nodes in a dynamical fashion, integrated forward in time. These forces are chosen to simultaneously improve the metrics that constitute a desirable placement, including routability and timing performance, while achieving a physically realizable (legal) configuration. An approach to timing-driven placement is described in what are referred to herein as "relative slack" embodiments. Relative slack embodiments provide improved results (in both absolute performance and numerical behavior) in some usage scenarios.

[0421] In a first illustrative, but not limiting, class of relative slack embodiments, forces affecting pins on a critical path (as well as pins on shared nets) are increased or decreased in an incremental fashion, rather than being directly calculated by a timing kernel. In the first class of embodiments, pin-to-pin forces (so-called timing-based or timing-driven forces) affecting nets (e.g. timing-critical nets) are governed by a force law equation having a linear increase with distance (Hooke's law) and a driver-to-load connectivity model. Other classes of relative slack embodiments may employ any arbitrary functional variation with distance, as well as alternate connectivity models. A set of weights governing the timing-based force attraction is periodically updated and adjusted to result in successively better relative node configurations with regard to overall circuit performance.

[0422] Relative slack embodiments assume the existence of a timing kernel that is called during an SDI run to provide the relative slack data used in updating the timing driven forces. Specific details of the timing kernel implementation are irrelevant, since only data from a timing graph and propagated pin slack analysis are needed. The frequency of update can be controlled in a variety of ways: e.g. at regular timestep intervals, in response to a triggering event (dynamical or otherwise), or in response to external (user, script, or graphical) input. Each update provides a "snapshot" of the critical path analysis for every net and pin in the system at that moment in time.
[0423] The relative slack as calculated for each pin, as well as the positions of connected pins (to handle bounding box effects as noted below), results in an adjustment to the "timing weight" associated with each pin. The timing weight is then used as a multiplier in the force law equation governing pin-to-pin attraction. Pins that need to be moved closer together to satisfy timing constraints tend to have weights increased (modulo possible normalization, noted below), in some usage scenarios in a manner varying with the amount of slack available. That is, the less slack (or the more negative the slack), the greater the positive adjustment to the attraction. Pins that have excess slack tend to have weights decreased. The reduction in weight on pins that have become "over-tightened" creates additional room for relaxation towards an optimal timing state.
[0424] At least some relative slack embodiments seek to improve the timing of nets that do not meet target slack through "bounding box" (or bbox) contraction. Because increases to total net length result in increased capacitance, the associated timing can be negatively impacted by long distance nets, even if the associated load pin is not on the critical path. The long distance net effect may be especially pronounced on large designs. The bounding box contraction considers a range of distances from the net bounding box, to help ensure that the bounding box is continuously contracted (otherwise pins on the bounding box may merely trade places).
[0425] The incremental approach to changing timing forces provides a quiet and consistent approach to timing closure during the course of an SDI run. In some cases where the timing constraints have been set unrealistically, it may be necessary to introduce a maximum to the total timing forces exerted by the system (for example, adding an upper limit to the ratio of timing net energy to total net energy, through a normalization term). A wide variety of other tunable controls are possible, including but not limited to:
baseline relative tightening factor (typically small compared to unity);
target minimum pin slack (typically zero);
positive pin slack above which relaxation may occur;
minimum change in pin slack to consider a pin in an "improving state";
distance between driver and load pins below which no further tightening occurs;
distance from the net bounding box where tightening starts to occur;
minimum bounding box size below which no further "bbox" tightening occurs; and
relative strength of bounding box vs. critical path tightening terms.

[0426] An illustrative, but not limiting, relative slack procedural flow is as follows.

[0427] First, in at least some embodiments, a pre-processing phase is performed (in other embodiments this might occur as a post-processing phase), where timing weight adjustment criteria, or the timing weights themselves, are adjusted to control properties of the distribution of the timing weights as a whole. The pre-processing permits balancing the resulting timing-driven forces with other effects in the system, such as connectivity forces (affecting routability) and expansion fields (affecting routability as well as utilization).

[0428] Second, update a timing graph using a Timing Kernel (TK). Using the updated timing graph, for every pin on every timing critical net, the slack associated with the respective pin is calculated (see 20,200 of Fig. 20A).

[0429] Third, iterate over all timing critical nets (20,300) and all load pins on the nets (20,400). Fourth, for each load pin on a respective timing critical net, calculate a respective pin timing weight adjustment (20,500 of Fig. 20A and the entirety of Fig. 20B):

[0430] 1. Calculate the worst slack on the respective net and find the bounding box pins. The pins are taken from some region around the bounding box of the net (the size of which is determined by performance tuning, scaling with system size).

[0431] 2. Determine if the respective driver pin needs to be factored into the bbox calculation. That is, when the driver pin determines the bounding box position, increasing the attraction to nearby pins that are farther from the bbox may be counterproductive. The attraction to pins on the far side of the bbox is likely more influential in decreasing the overall capacitance. Fig. 21A illustrates a driver D in the interior of a net bounding box region determined by loads L1, L2, and L4. Fig. 21B illustrates a driver D to one side of a net bounding box region determined by the driver and loads L1, L2, and L4.

[0432] 2a. To focus on connections of loads to the driver, the effect of a driver on a bbox is indirectly applied to the loads themselves, through a multiplication factor on any tightening term.
[0433] 3. For each pin, modify a respective timing weight as needed (see Fig. 20B).

[0434] 4. For pins that meet target slack (Yes-path from 21,210 to 21,250):

[0435] 4a. If the slack for the associated net is negative (No-path from 21,250 to 21,270), then bounding box effects are considered in order to continue to make positive progress. By taking into account a range of distances from the bbox, rather than a hard boundary, sloshing (oscillations) as pins move onto or off of the bbox is reduced.

[0436] If (see decision 21,270) a pin is near or on the bounding box of a critical net, then determine how much to tighten up the connection.
[0437] If (see decision 21,280) a load pin is within a specified (small) distance from the driver, do nothing (End 21,285), as further tightening of the connection is counterproductive (e.g. it may result in increased oscillatory motion between the load and driver).

[0438] Otherwise, strategies for tightening (increase weight 21,290) include:
- if the bbox size is sufficiently small, then do nothing;
- if a pin is on the bbox, then tighten at full strength;
- if a pin is farther than a specified distance from the bbox, then do nothing; and
- otherwise (in between), tighten from 0 to 1x full strength, depending linearly on distance.
[0439] 4b. If the pin was not tightened (Yes-path from 21,250 to 21,260), then the pin may be considered as a candidate for relaxation (21,260). By allowing connections to either strengthen or weaken, the ability of the system to evolve and relax to an optimal configuration is improved.

[0440] 4b1. The amount of relaxation allowed for the pin connection is dependent on the worst slack for the net. If the pin has positive slack, but the worst case slack on the net is negative, then the amount of relaxation allowed is reduced.

Recall that the pin was not tightened, so little is added to the total capacitance on the net.

[0441] 4b2. Further, the relaxation is subject to a reasonable upper bound. Otherwise the weights may drop from substantial to nonexistent in a single pass.

[0442] 4b3. In both of these cases, moderating the relaxation allowed during one update cycle helps prevent sudden movement away from what was potentially a fairly optimal solution. Such sudden movement is manifested as increased sloshing in the overall timing performance.

[0443] 5. For pins having negative slack (No-path from 21,210 to 21,220):
[0444] 5a. If (see decision 21,220) the slack of a constrained pin is improving according to a specified criterion, then let the pin continue to evolve without change (Yes-path to End 21,225).

[0445] 5b. If (see decision 21,230) the driver and load are within a critical distance, then no tightening is performed (Yes-path to End 21,235). Otherwise tighten the connection (increase weight 21,240), in a manner varying with the ratio of the slack on the pin to the worst negative slack, so that the pins most affecting the critical path are likely affected the most.
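The per-pin decision flow of steps 4 and 5 can be sketched as a single update function. The branch structure is a simplification of Figs. 20B and 21, and every tuning constant (tighten, relax, relax_cap, min_dist, bbox_range) is a hypothetical placeholder for the tunable controls listed earlier.

```python
def adjust_timing_weight(weight, pin_slack, worst_net_slack, wns,
                         dist_to_driver, dist_to_bbox, improving,
                         tighten=0.05, relax=0.02, relax_cap=0.1,
                         min_dist=1.0, bbox_range=5.0):
    """One incremental update of a pin's timing weight (a multiplier
    in the Hooke's-law pin-to-pin force). wns is the worst negative
    slack of the timing graph; all tuning constants are hypothetical."""
    if pin_slack < 0:                    # step 5: constrained pin
        if improving:                    # 5a: let it evolve unchanged
            return weight
        if dist_to_driver < min_dist:    # 5b: already close enough
            return weight
        # tighten in proportion to this pin's share of the worst slack
        return weight * (1 + tighten * (pin_slack / wns))
    if worst_net_slack < 0:              # 4a: net still failing; bbox term
        if dist_to_driver < min_dist or dist_to_bbox > bbox_range:
            return weight
        frac = 1.0 - dist_to_bbox / bbox_range  # 0-1x linear ramp
        return weight * (1 + tighten * frac)
    # 4b: candidate for relaxation, capped so weights cannot vanish at once
    return weight * (1 - min(relax, relax_cap))
```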

Timing Driven Buffering Overview

[0446] Timing driven buffering and resizing for integrated circuit designs, e.g. structured array architectures, provides increased performance, reduced cost, or both. Nets having high capacitance and/or fanout, as well as timing critical nets, are preferentially processed to reduce maximum delay and/or transition time, enabling allocation of limited structured array resources to more important nets. Timing driven buffering is performed to generate trees of buffers. Timing driven sizing is performed to upsize selected elements. During the buffering, Steiner tree routes are segmented and various buffering options are evaluated for each segment according to buffer cost, required time, and lumped capacitance. The options are sorted and partitioned according to the sort. Computational efficiency is improved by eliminating all but a topmost portion of each partition. Options are further evaluated according to performance, including timing and routing costs. Displacement coefficients of macros are computed during the sizing to evaluate the desirability of reallocating resources implementing less critical macros to more critical macros. A plurality of low-level implementations of each macro are evaluated and compared. Logic replication and tunneling may be performed according to timing improvements and routing costs. Hold time fixes may be implemented by delaying clocks and/or replacing a fast FlipFlop (FF) with a slower element.

[0447] In some embodiments of design flows relating to array architecture based integrated circuits (e.g. structured arrays or other similar Application Specific Integrated Circuit (ASIC) implementations), timing driven buffering is used to "reconstruct" or "re-synthesize" nets having high capacitive loads or having high fanouts. In some usage scenarios modifying the nets reduces a maximum capacitive load driven by any buffer or driver, or group of elements. In some usage scenarios the modifying reduces a maximum fanout associated with any net or group of nets. In some embodiments a high capacitive load may be driven by a dedicated buffer, or a dedicated tree of buffers. In various embodiments any combination of maximum transition time, maximum rise/fall time, and maximum delay is minimized when performing timing driven buffering.

[0448] In some embodiments the timing driven buffering is according to fixed resources available in various structured array architectures. In some embodiments the timing driven buffering is iterative (e.g. to achieve timing closure). In some embodiments the timing driven buffering accounts for any combination of local and global congestion. In some embodiments the timing driven buffering includes morphing non-buffer resources and allocating the morphed resources as buffers.

[0449] In some embodiments of array architecture design flows, timing driven gate resizing is used to improve performance of various combinations of highly capacitive and high fanout nets. Logic gates are upsized (i.e. replaced with a gate having an equivalent logic function but greater drive strength) as necessary to reduce maximum delay and/or transition times. In some embodiments the upsizing is via so-called "form replacement", or replacing a form-level macro with an alternate form-level macro (such as substituting a gate with a higher drive strength for a gate with a lower drive strength).

[0450] In some embodiments timing driven gate resizing is constrained according to fixed resources available in various structured array architectures. In some embodiments a plurality of resources are simultaneously "swapped" (i.e. deallocated from a first use and reallocated to a second use) to improve critical path timing. In some embodiments the timing driven gate resizing includes morphing non-buffer resources and allocating the morphed resources as "upsized" gates or buffers.

[0451] In various embodiments of timing driven buffering and resizing for structured array architectures, timing driven hold time fixes are implemented by any combination of morphing, delaying clock signals, and buffering. In some embodiments any combination of logic replication and tunneling is used to improve circuit performance of designs implemented according to a structured array fabric.

[0452] Figs. 22A and 22B illustrate, respectively, an example circuit excerpt before and after processing according to an embodiment of timing driven buffering and resizing for an array architecture. Fig. 22A illustrates critical load C2 driven by buffer b2, which is driven by buffer b1, which is in turn coupled to Driver. Thus there are two buffers between the driver and the critical load. Non-critical loads NC1 and NC2 are also driven by buffer b2. Loads on the critical path from Driver to C2 include C0 driven by Driver and C1 driven by buffer b1. Fig. 22B illustrates a result of timing driven buffering and resizing, as applied to the topology of Fig. 22A, where critical load C2 is driven from new/modified buffer b1' that is directly coupled to Driver. Thus there is only one buffer between the driver and the critical load, providing an enhanced arrival time for the critical load compared to the topology of Fig. 22A.

Structured ASIC Timing Closure

[0453] Fig. 23 illustrates a flow diagram of an integrated circuit design flow including an embodiment of processing in accordance with an embodiment of timing driven buffering and resizing for an array architecture, e.g. a structured ASIC.

Timing Driven Buffering

[0454] Fig. 24A illustrates a top-level view of an embodiment of timing driven buffering and resizing for an array architecture. In some usage scenarios timing driven buffering and resizing serves to reduce delays of critical path elements and decrease transition times associated with drivers (or nets, or both). Routing-aware buffering is used to reduce maximum congestion in otherwise heavily congested regions.

[0455] In some embodiments an initial buffering phase is performed ignoring timing-driven constraints, while in other embodiments the initial buffering accounts for timing-driven constraints. According to various implementations timing-driven buffering and resizing includes any combination of net prioritization, global Steiner tree routing, evaluating multiple route trees, computing buffering options, pruning, and determining and selecting a solution.

[0456] In some embodiments a buffering subsystem processes nets individually, prioritizing the nets according to timing criticality and enabling preferential treatment for more critical nets. The preferential treatment is according to any combination of buffering resources, wiring resources, and routing congestion (measured according to a metric). In structured array usage scenarios, buffer resources are finite and several nets may be simultaneously competing for the same resources. Ordering nets and processing the most critical nets (or the nets having the highest negative slack) first provides the more critical nets with access to the buffer resources first. In addition, as more nets are processed, the most critical of the remaining nets have access to the wire routing regions most beneficial for routing. Less critical nets are relegated to more meandering routes to meet region congestion constraints.

[0457] In some embodiments the buffering subsystem initially constructs global Steiner tree routes for all nets to estimate heavily congested regions. Routing and/or congestion hotspots that should be avoided while buffering (at least for non-critical nets) are identified.

[0458] In some embodiments the buffering subsystem initially builds multiple route trees for each driver that couple the respective driver to all loads of the driver. The route trees are heuristic based, and the heuristics include prioritizing critical loads differently than non-critical loads and operating with an awareness of the previously identified hotspots. The route tree building includes any combination of shortest path weight and net spanning factor techniques, enabling results having different topologies.

[0459] In one embodiment of one of the route tree heuristics, loads are first grouped into multiple partitions based on load (or pin) criticality. More critical loads are prioritized for Steiner tree route construction first. Then less critical loads are processed, enabling the more critical loads to have a more direct route from driver to load. In addition, the more critical loads are assigned a higher shortest path weight, thus reducing branching of the route tree from the more critical loads to the less critical loads.

[0460] In some implementations a Steiner tree based route is decomposed into several segments, such as according to a global cell granularity used when constructing the Steiner tree based route. A dynamic programming technique is used to compute a buffer solution for each of the route trees. The dynamic programming technique includes maintaining several solutions for each segment to be considered for use in implementing a sub-tree of the respective route tree. The respective route tree is processed bottom-up, i.e. all of the load terminals of the tree are visited before the driver. Buffering options at a segment are computed by combining solutions of all predecessor sub-trees with a current solution.

[0461] Fig. 25A illustrates a portion of a route tree having several branches decomposed into segments according to processing by an embodiment of timing driven buffering. Child options are a function of downstream options. For example:
Options at S0 = Product( Options at S1, Options at S2 ).

[0462] Fig. 25B illustrates several segment options for segment S0 of Fig. 25A. The options include no buffering (Opt1), a buffer before the branch to segment S2 (Opt2), a buffer on segment S1 (after the branch, as Opt3), a buffer on segment S2 (after the branch, as Opt4), and two buffers after the branch, one on each of segments S1 and S2 (Opt5).

[0463] If a segment currently being processed is a branch point, then the current segment has multiple sub-trees below it, and each of the sub-trees contains an array of options. The options are merged by performing a cross product of the option sets. After computing the cross product, each feasible solution for the sub-tree is combined with a buffering solution for the current segment.
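The cross product merge at a branch point can be sketched as follows. Representing each option as a (BC, RT, CL) tuple anticipates the parameters defined in paragraph [0465], and the combination rule (costs and loads add; the tighter required time governs) is an assumption rather than a stated formula.

```python
from itertools import product

def merge_branch_options(subtree_option_sets):
    """Merge child sub-tree options at a branch point via cross
    product. Each option is a (BC, RT, CL) tuple; the combined buffer
    cost and capacitive load add, while the combined required time is
    the minimum over the children (an illustrative assumption)."""
    merged = []
    for combo in product(*subtree_option_sets):
        merged.append((sum(o[0] for o in combo),
                       min(o[1] for o in combo),
                       sum(o[2] for o in combo)))
    return merged
```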

[0464] Multiple segment options are computed for each segment. The number of options produced is proportional to the number of buffer types (or buffer electrical characteristics) available according to the technology associated with an integrated circuit design (such as a standard cell library). In some implementations various options are computed for each segment, including a non-buffered option, a high drive strength buffer option, and a low drive strength buffer option.
[0465] For each option, several parameters are determined, including Buffer Cost (BC), Required Time (RT), and lumped Capacitive Load (CL). The parameters are subsequently used to determine option cost and feasibility. BC measures cost according to the buffering solution for the entire sub-tree "underneath" the segment being evaluated. RT measures the expected required time for a signal at the input of the segment. CL measures the cumulative capacitive load of the segment and all associated child segments.

[0466] Pruning techniques are used to limit computation, maintaining selected options for each route segment. The options selected are those most likely to result in a "good" solution at the root of the route tree. A first pruning technique includes deleting any infeasible solutions, such as a buffering option that has accumulated capacitance exceeding the maximum drive capability of the available buffers. A second pruning technique removes redundant options. An option having higher BC and smaller RT, or an option having higher BC, smaller RT, and higher CL, compared to an existing option, is considered redundant. A third pruning technique includes trimming the number of options according to an upper bound. In some embodiments the upper bound is variable, while in other embodiments the upper bound is predetermined (at a value such as 10, 20, 50, or 100). In some implementations the options are sorted in order of RT (highest RT first). In some embodiments a contiguous portion of the top of the sorted options is retained, the portion being equal in number to the upper bound (i.e. the "best" options are kept). In other embodiments the sorted options are partitioned into four quarters, and a number of options are preserved from each quarter. In some embodiments the number is chosen to be one-fourth of the upper bound. In some usage scenarios the preserving according to partitions enables discovery of solutions that appear locally inferior, but when combined with parent segments appear superior.
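The three pruning techniques can be sketched together. The (BC, RT, CL) option tuples, the drive capability limit, and the choice of upper bound are assumed inputs, and the quartile-preserving variant of the third technique is shown rather than the plain top-of-list variant.

```python
def prune_options(options, max_cap, upper_bound=20):
    """Apply the three pruning techniques to one segment's options.
    options: list of (BC, RT, CL) tuples."""
    # 1. Delete infeasible options: accumulated capacitance beyond the
    #    maximum drive capability of the available buffers.
    opts = [o for o in options if o[2] <= max_cap]
    # 2. Remove redundant options: an option is redundant if another
    #    option has lower buffer cost and higher required time.
    opts = [o for o in opts
            if not any(e[0] < o[0] and e[1] > o[1] for e in opts)]
    # 3. Trim to the upper bound: sort by RT (highest first), partition
    #    into four quarters, and preserve one-fourth of the bound from
    #    each quarter so locally inferior options can survive.
    opts.sort(key=lambda o: o[1], reverse=True)
    if len(opts) <= upper_bound:
        return opts
    q, keep = len(opts) // 4, upper_bound // 4
    quarters = [opts[i * q:(i + 1) * q] for i in range(3)] + [opts[3 * q:]]
    return [o for quarter in quarters for o in quarter[:keep]]
```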

[0467] In some embodiments determining and selecting a buffering solution includes evaluating options according to performance (such as arrival time) and (estimated) routing congestion. A disproportionately higher weighting is applied to timing cost when evaluating a critical net. A buffering solution having a lower hotspot (i.e. congestion) cost is preferentially chosen for non-critical nets.

Timing Driven Sizing

[0468] Fig. 24B illustrates selected details of an embodiment of timing driven resizing for an array architecture. Timing-driven form sizing (or resizing) selects alternate forms to improve any combination of drive capability and stage delay, for example by replacing a lower drive strength gate with a relatively higher drive strength gate. In some usage scenarios macro or form sizing is preferred over buffering, such as when the cost of upsizing a driver is less than the cost of buffering a net. In some structured ASIC usage scenarios buffer sites are predetermined according to block tiles, and thus the fixed locations of buffer sites may result in relatively high intrinsic buffer cost or associated congestion cost. In some situations there may be no available sites (or slots) near a macro targeted for resizing.

[0469] In some embodiments a form-sizing subsystem attempts to discover nearby sites by (re)implementing the macro using a different set of primitives. According to various embodiments the primitives correspond to standard cells, structured array tile elements, or other similar low-level resources. In some implementations the form-sizing subsystem is enabled to "displace" (or "move") selected forms (such as forms on non-critical paths) that are initially near the macro that is to be resized. In structured array integrated circuit designs, strictly speaking the forms are not moved; instead fixed-location sites are deallocated in one area and reallocated in another area.

[0470] A Displacement Coefficient (DC) of a macro is computed as follows:

DC of macro = Sum (DC of each morphable form within the macro); and
DC of a morphable form = Product (primitive densities of all the primitives within the morphable form).

[0471] The DC is a conceptual measurement of "placeability" or ease of placement of an element when the element is currently unplaced. A macro is more placeable if it may be implemented with more morphable alternatives. A morphable alternative is more placeable if the primitives of the morphable alternative are placeable (or relatively more placeable), such as when there are available (or unused) sites for the primitives.
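The two formulas above translate directly into code. This sketch assumes a macro is represented as a list of morphable forms, each form being a list of primitive type names, and that a mapping from primitive type to available-site density is supplied; those data layouts are illustrative assumptions:

```python
from math import prod

def displacement_coefficient(macro, density):
    """DC of macro = sum over its morphable forms of the product of the
    densities of that form's primitives. `macro` is a list of forms;
    each form is a list of primitive type names; `density` maps a
    primitive type to its site density (illustrative encoding)."""
    return sum(prod(density[p] for p in form) for form in macro)
```

If any primitive type in a form has density zero (depleted resource), that form contributes zero to the macro's DC, matching the behavior described in paragraph [0473].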

[0472] The primitive densities relating to the DCs of morphable forms are computed as follows. A site density grid is constructed that is a two-dimensional matrix of grid resource usage. For each element of the density grid, the number of available resources and used resources is computed for each resource type. Relatively sharp density gradients are smoothed by accumulating density from the eight neighboring grid elements into a respective grid element. Thus the computed density at each grid element is an average density at the element in conjunction with the eight nearest neighboring elements. The site density grid values are then used to determine the DCs of the morphable forms.
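The eight-neighbor smoothing described above can be sketched as a pure-Python pass over the grid; the handling of boundary elements (averaging over only the neighbors that exist) is an illustrative assumption:

```python
def smooth_density(grid):
    """Smooth a 2-D site density grid by averaging each element with its
    (up to) eight nearest neighbors. Boundary elements average over the
    neighbors that exist. Sketch only; grid is a list of lists of floats."""
    rows, cols = len(grid), len(grid[0])
    out = [[0.0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            # Gather the 3x3 neighborhood, clipped to the grid bounds.
            vals = [grid[rr][cc]
                    for rr in range(max(0, r - 1), min(rows, r + 2))
                    for cc in range(max(0, c - 1), min(cols, c + 2))]
            out[r][c] = sum(vals) / len(vals)
    return out
```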

[0473] The DC of a morphable form is computed by looking up the density of each of the primitives of the morphable form, within the site density grid and according to respective primitive types. The morphable form DC computation continues by multiplying the look-up results (i.e. primitive densities) together. If a particular resource or resource type is depleted (or nearly depleted) within the grid, then the morphable form DC is zero (or nearly zero). Thus the resource depletion results in the placeability of the morphable form being low.

[0474] Resizing a macro includes selecting a form from a plurality of implementation choices. Each of the choices is speculatively selected and evaluated with respect to the macro being resized. A timing score is computed that is equal to the arrival time at an output of the macro assuming the macro is implemented with the speculatively selected form. If the timing score is poorer than previously saved possible implementation choices, then the current choice is rejected. If the timing score is better, and the drive strength of the speculatively selected form is sufficient to drive the capacitive load at the output, then the speculatively selected form is saved as a possible implementation choice.
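The speculative evaluation loop above can be sketched as follows; the `(name, arrival_time, drive_cap)` tuple encoding is an illustrative assumption, with a lower arrival time meaning a better timing score:

```python
def select_forms(forms, load_cap):
    """Speculatively evaluate candidate forms for a macro being resized.
    A form is saved only if its timing score (arrival time) beats the
    best saved so far AND its drive strength can handle the output load.
    Field names and the tuple layout are illustrative."""
    saved = []
    best = float("inf")
    for name, arrival, drive_cap in forms:
        if arrival >= best:
            continue            # timing score poorer than saved choices
        if drive_cap < load_cap:
            continue            # cannot drive the capacitive load
        saved.append(name)
        best = arrival
    return saved
```

The saved list is naturally ordered from worst to best arrival time, which matches the prioritization used in paragraph [0476].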

[0475] In some embodiments placing a macro after determining an implementation according to one or more morphable forms proceeds as follows. New coordinates of the (now form-level) macro are computed based on all of the connections of the form-level macro. The coordinates of drivers of nets connected to all of the input pins of the form-level macro, as well as associated net fanouts, are used to compute the new coordinates.

[0476] In some embodiments a form placing sub-system performs an attempted placement of each of the possible implementation choices determined during the resizing of the macro. The underlying morphable forms are already prioritized based on the respective timing scores (based on an idealized placement), and the attempted placements are, in various embodiments, performed in priority order (i.e. morphable forms resulting in better arrival times are tried first). Unplaceable morphable forms are not considered further. After a respective placement is found for each placeable one of the morphable forms, the placed morphable form is scored based on timing in the respective placement. After attempting to place and after scoring all of the morphable forms, the one of the morphable forms with the best score, if any, is selected. In some embodiments, if there is no placeable one of the morphable forms, the window size is increased and attempted placement of the morphable forms is repeated. Attempted placement of one of the morphable forms succeeds if individual attempted placements of each of the respective primitives of the morphable form all succeed. Attempted placement of a particular one of the respective primitives of a particular one of the morphable forms proceeds as follows.

[0477] A site locator (or slot locator) searches all possible sites around a given coordinate within a certain window size and returns a list of all sites within the window assignable to the particular primitive. In some embodiments, the list is sorted by Manhattan distance from the given coordinate. The list is then examined. In some embodiments, a first acceptable site is selected. In other embodiments, all sites in the list are processed and scored, such as by scoring on DC, and the best-scoring acceptable site is selected. In some embodiments, an acceptable site is one that has a respective DC above a threshold. In various embodiments, unoccupied sites have a respective DC above the threshold, such as a DC of 1. The respective DC of an occupied site is obtained by looking up the DC of the parent form-level macro of the occupied site. If an occupied site is selected, then the parent macro is tentatively scheduled to move from the site (i.e. a primitive in the site will be placed elsewhere) and the particular primitive is tentatively assigned to the site. The tentative move of the parent macro and the tentative assignment of the particular primitive to the site are actualized if the particular morphable form is selected as the one of the morphable forms with the best score. Any parent macros that are scheduled to move are queued to be visited later based on criticality of the parent macros.
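The first-acceptable-site variant of the locator can be sketched as follows. The `(x, y)` site tuples, the square window, the `dc_of` callback, and the threshold value are all illustrative assumptions:

```python
def locate_site(sites, origin, window, dc_of, threshold=0.5):
    """Sketch of a site locator: gather all sites within a square window
    around `origin`, sort by Manhattan distance, and return the first
    acceptable site (one whose DC is above `threshold`). `sites` is a
    list of (x, y) tuples; `dc_of` maps a site to its DC."""
    ox, oy = origin
    candidates = [(x, y) for x, y in sites
                  if abs(x - ox) <= window and abs(y - oy) <= window]
    candidates.sort(key=lambda s: abs(s[0] - ox) + abs(s[1] - oy))
    for site in candidates:
        if dc_of(site) > threshold:
            return site
    return None  # caller may enlarge the window and retry, per [0476]
```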

Timing Driven Logic Replication and Tunneling

[0478] In some embodiments a driver node is logically replicated for nets having high capacitive loading or high fanout. The replication is performed selectively according to evaluations of timing improvements and routing costs. In some embodiments tunneling is performed to move the driver closer to a cluster of loads. In some embodiments the tunneling is performed after evaluating the timing improvements and routing costs.

[0479] Fig. 26 illustrates example results of an embodiment of logic replication and tunneling for an array architecture. The example illustrates a single FF driving three clusters of load (C1, C2, and C3). After replication and tunneling (shown in the lower portion of the figure), the FF is replicated as FF1, FF2, and FF3. Each of the replicated FFs is then placed near the respective cluster driven by the FF.

Timing Driven Hold Time Fixes

[0480] In some embodiments timing driven hold time fixes proceed as follows. Excess setup time (or slack setup time) is determined for each launch FF that is a root of a hold time violation. If there is excess setup time, then in some embodiments the clock signal feeding the launch FF is delayed. In some implementations the delay is via addition of a dummy load. In other embodiments a hold time violation is addressed by morphing the launch FF to a slower FF. In some implementations the morphing is via swapping the (original) launch FF with an unused (or available) slower FF.

[0481] Node density in various SDI embodiments is influenced by a variety of effects, including netlist connectivity, circuit performance, and expansion fields. The former two exert an attractive force between nodes that depends upon netlist topology considerations or critical path analysis. For brevity these are referred to as "connectivity forces". Without the presence of expansion fields, the connectivity forces tend to draw nodes together into a highly clumped configuration that may exceed the local slot resource supply. Spreading of nodes by the expansion fields then serves a twofold purpose: (a) providing solutions to slot assignment over some suitably chosen sub-discretization of a die, and (b) enhancing routability, since localized clumping of nodes implies greater local demand for routing resources.

[0482] In a chip floorplan that is free of obstructions, very strong expansion fields result in a node distribution that is almost perfectly uniform. However this situation may not be desirable, since some amount of clumping may be beneficial. Once the node distribution reaches the point of routability, further increases to the expansion field strength may only worsen the routing problem by forcing nodes further apart than is optimal, as seen by examining cutscores or circuit performance as a function of expansion field strength.

[0483] Further, the demand for routing resources may exceed supply only in very localized regions, while the bulk of the node distribution presents a tractable routing problem. The localized regions may occur due to netlist (topological) or floorplan effects. Increasing the expansion field strength to compensate for the "clumpiness" of the node distribution in selected regions affects the distribution as a whole, and in some usage scenarios may be suboptimal. In cases where the floorplan contains obstructions, the supply of routing resources can be a complex function of position on the die, and here a global approach can fail entirely to have the desired effect.

[0484] The illustrative but not limiting density-driven approaches presented here for addressing the problem of routing congestion in SDI can be categorized as follows:
1. Region based
   a. By factor
   b. By function
2. Steiner cuts based
   a. Relative
   b. Absolute (i.e. supply vs. demand)

[0485] In the illustrative density enhancement embodiments, the density enhancement is inserted between the "fold" and "filtering" phases of node density computation.

[0486] The flow 27,100 for density modification is illustrated in Figure 27. Note that the effects introduced by procedures 27,100b, 27,100c, and 27,100d are completely independent of each other and can therefore be applied in any combination.

[0487] In procedure 27,100a, the normalization factor is typically taken as the average density, not counting that in excluded regions.

[0488] In procedure 27,100b, for each defined region that possesses a density scaling factor, the density is multiplied by the associated factor at each density field gridpoint contained within the region. Note that this technique is essentially the same as increasing the effective mass of each node contained therein.
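Procedure 27,100b can be sketched as follows. The region encoding (inclusive row/column bounds plus a factor) is an illustrative assumption; as noted below, regions may overlap, in which case the factors compose multiplicatively:

```python
def apply_region_scale(density, regions):
    """For each region with a density scaling factor, multiply the
    density at every gridpoint inside the region by that factor.
    A region here is ((r0, r1), (c0, c1), factor) with inclusive
    bounds -- an illustrative encoding. Mutates and returns `density`."""
    for (r0, r1), (c0, c1), factor in regions:
        for r in range(r0, r1 + 1):
            for c in range(c0, c1 + 1):
                density[r][c] *= factor
    return density
```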

[0489] Given a statistically uniform node distribution to start with, the scale factor density enhancement tends to drive nodes out of the specified region, ultimately resulting in a node density there on the order of (average density) / (scale factor), edge effects notwithstanding. Any number of such regions and scale factors can be defined. Regions may overlap if so desired.

[0490] In procedure 27,100c, for each defined region that possesses a density enhancement function, the associated spatial variation multiplied by the normalization factor is added to the existing density. The spatial variation is evaluated at each density field gridpoint contained within the region. In some embodiments an arbitrary functional variation is supported by expressing the function in Tcl (Tool Command Language) and using an embedded Tcl interpreter to return the result of the given expression at each gridpoint.

[0491] The functional variation enhancement may be well suited for the case where the node density becomes very rarefied, e.g. in small channels between obstructions. In rarefied density situations, the scale factor approach becomes less effective for pushing nodes out of the region, since there are fewer nodes to "push against". The functional variation serves essentially as a background density, depending on the existing node density only through the normalization factor (which is global). As in procedure 27,100b, there is no limit to the number of regions and functions that can be defined, and regions may overlap if desired.

[0492] In procedure 27,100d, a Steiner-cuts congestion density enhancement term is added. At this point in the flow, for this density enhancement embodiment, a congestion enhancement value at each gridpoint is available (described in detail below). Adding the congestion enhancement term (times a suitable normalization factor, e.g. the average density) for each gridpoint gives a final result.

[0493] The flow 28,200 used to determine the Steiner-cuts congestion term on the SDI grid in the density enhancement embodiment is given in Figure 28.

[0494] In procedure 28,200a, a so-called "congestion array" is generated that is a measure of routing congestion, taken from a Steiner cuts measurement. Since the calculation of routing congestion may be computationally expensive, the congestion array need only be calculated initially and at specified intervals as a run proceeds. An intermediate grid is used to assert the independence of the congestion array from the particular form of the routing congestion diagnostic, as well as from the SDI grid resolution. The procedures used to create the congestion array are illustrated in Figure 29.

[0495] In procedure 28,200b, the congestion array is run-length averaged according to a specified relaxation factor. By phasing the change in gradually, this helps prevent a sudden "shock" to the system (which can cause unphysical fluctuations) every time the congestion array is recalculated. The relaxation parameter is chosen to vary from zero (static; the congestion array never changes) to unity (the congestion array changes instantaneously).
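The run-length averaging of procedure 28,200b behaves like an exponential moving average of the congestion array; that specific form is an assumed interpretation of the description:

```python
def relax_congestion(previous, measured, relaxation):
    """Blend the previous congestion array toward a new measurement.
    relaxation = 0 keeps the array static; relaxation = 1 adopts the
    new measurement instantaneously; intermediate values phase the
    change in gradually. Arrays are lists of lists of floats."""
    return [[(1.0 - relaxation) * p + relaxation * m
             for p, m in zip(prow, mrow)]
            for prow, mrow in zip(previous, measured)]
```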

[0496] In procedure 28,200c, a final congestion density enhancement array is calculated. The calculation may be performed once each timestep, in response to configuration changes, or both. Further details are illustrated in Figure 30.

[0497] In procedure 29,300a, the Steiner-cuts array is fetched from the generator. In some embodiments a timing kernel (TK) performs procedure 29,300a. The calculation may include an idealized buffer tree, at implementor or user discretion.

[0498] In procedure 29,300b, the Steiner-cuts array is subject to a filtering operation to increase smoothness, which helps improve the accuracy of a subsequent interpolation procedure. In some embodiments a number of binomial digital filter passes are used.

[0499] In procedure 29,300c, the value at each gridpoint in the intermediate grid discretization is calculated using a linear spline approach.

[0500] In procedure 30,400a, the congestion array is smoothed using filtering similar to that of procedure 29,300b, in part to improve the accuracy of the interpolation. The filtering is also considered the "final smoothing" phase of the field and is subject to the most user and/or programmatic control, to improve the quality of the final result. The smoothing is most effective when the scale lengths associated with the variation of the density enhancement are "semi-global", e.g. small compared to the die size, but large compared to the motion of a node in a single timestep.

[0501] In procedure 30,400b, the congestion array is normalized as needed. First it is clipped at a pre-determined value of maximum congestion, to constrain the resulting density gradients within reasonable limits. In relative spreading mode, a normalization of unity is imposed, thus inducing a density-driven outflow from congested areas without regard to actual routing supply.
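The clipping and the two normalization modes of procedure 30,400b can be sketched as follows. Interpreting "normalization of unity" as scaling to unit maximum, and the absolute mode as thresholding demand/supply at the allowed limit, are both assumptions; the function signature is illustrative:

```python
def normalize_congestion(congestion, max_congestion, mode="relative",
                         supply=None, demand_limit=0.8):
    """Clip congestion values at `max_congestion` to bound density
    gradients. In relative mode, scale to unit maximum (density-driven
    outflow regardless of routing supply). In an assumed reading of
    absolute mode, a gridpoint contributes only where demand/supply
    exceeds `demand_limit` (e.g. 80% capacity)."""
    clipped = [min(c, max_congestion) for c in congestion]
    if mode == "relative":
        peak = max(clipped) or 1.0
        return [c / peak for c in clipped]
    # Absolute mode: zero wherever routing supply satisfies demand.
    return [max(0.0, c / s - demand_limit) if s else 0.0
            for c, s in zip(clipped, supply)]
```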

[0502] In absolute spreading mode, the routing demand versus supply is compared to the maximum allowable relative demand (e.g. 80% capacity). Only at gridpoints where congestion exceeds the allowed limit does the enhancement field take on substantial values (while enforcing a reasonably smooth variation). In the case of a density-gradient model for calculating the expansion fields, the congestion density field that results is flat everywhere that routing supply satisfies demand, rising smoothly into elevated "mounds" at locations where the demand exceeds supply.

[0503] The congestion array is then modified according to the desired strength of the density enhancement effect. Both multiplicative and power-law transformations may be applied. The strength of the enhancement may be increased over time to allow for the gradual movement of nodes out of congested areas.

[0504] In procedure 30,400c, the value of the congestion array at each SDI gridpoint is calculated using a linear spline approach.

[0505] In some SDI-based integrated circuit design flow embodiments "tunneling" is used to relieve congestion at boundaries. Tunneling governs transitions of nodes through one or more obstructed regions not available for node placement, i.e. xzones, of a chip (or portion thereof). In some embodiments the transition is according to a mathematical criterion. In some embodiments nodes are selected as tunneling candidates according to node attraction into one of the obstructed regions. In some embodiments the criterion is affected by node density. In some embodiments the criterion is affected by node interconnections (or connectivity). In some embodiments the criterion is affected by circuit performance (i.e. timing).

[0506] Tunneling enables further placement progress, according to selected metrics such as routability and circuit performance, while taking into account xzones. Tunneling has several aspects including candidate node selection, nodal move speculation, and node tunneling criteria (i.e. keep the move/tunnel or reject it).

[0507] In some embodiments tunneling is performed at the end of an SDI timestep. Any intervening sub-steps taken by the time integrator (e.g. part steps taken by a Runge-Kutta (RK) integrator) are not considered. During the course of a timestep (and any associated sub-steps) the nodes are allowed to drift into xzones in order to allow the time integrator to proceed at full speed, since in some usage scenarios a smooth trajectory in a numerical simulation enables more accurate integration, and thus may enable a longer timestep (given a predetermined accuracy target). At the end of one full timestep, only nodes that have been coerced into xzones are considered for tunneling speculation.

[0508] Fig. 31 illustrates an embodiment of a processing flow for node tunneling out of exclusion zones in an SDI-based integrated circuit design flow. In some implementations any combination of the illustrated elements is performed by software routines known collectively as a "node mover". In 31,100a nodes are selected as candidates for tunneling based on respective positions. Nodes that have moved into an xzone are included in a set of all transiting nodes. Each respective node will have arrived at the respective position (or point) due to (discounting inertial effects) the vector sum of all forces acting on the respective node. For example, some of the forces may be due to netlist connectivity (i.e. the respective node is drawn towards topologically close nodes) and some of the forces may be due to a local overabundance of nodes (density buildup). In some usage scenarios selecting nodes in xzones for tunneling consideration is an efficient selection criterion that discriminates nodes likely to benefit from a tunneling transition to another side of an xzone or multiple abutting xzones.

[0509] In 31,100b, having determined candidate nodes, per-node initialization is performed. In some usage scenarios the tunneling candidate nodes are a small fraction of the total nodes, and for efficiency a secondary set of data structures is used to process the candidate nodes. A transiting node class contains a node id (that maps to an original node entry) and any ancillary data required for accurate tunneling speculation. Henceforth, the class of all node candidates for tunneling is referred to as "transiting nodes".

[0510] In 31,100c, all transiting nodes are snapped to the nearest xzone boundary. The snapped position is identical to the resulting node position were no tunneling to occur, and assures a baseline for proper field computation and comparison to the post-transit result.

[0511] In 31,100d, the forces on transiting nodes at the current positions (pre-speculation) are evaluated. See the discussion relating to Fig. 32 located elsewhere herein for further information.

[0512] In 31,100e, the position of each transiting node is restored to the position before the processing relating to 31,100c. The node mover then finds the intercept on the xzone boundary that results from application of the force vector components on the node. In some embodiments node inertia is also taken into account when determining the xzone boundary intercept. The node is speculatively moved to just past the intercept position, outside the original xzone. In the event that multiple abutting xzones exist and the node lands in yet another xzone, the mover is invoked again using the original trajectory to direct the move. The speculative movement procedure is continued as many times as necessary for the node to arrive in a region external to any xzone.
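The repeated speculative move can be sketched as follows. Representing xzones as axis-aligned rectangles and advancing in fixed sub-steps along the original trajectory are illustrative simplifications (the description above computes exact boundary intercepts):

```python
def speculate_move(pos, direction, xzones, step=0.1, max_iters=100):
    """Advance a node along its force trajectory until it lands outside
    every xzone, re-invoking the move whenever it lands in an abutting
    xzone. `xzones` is a list of (x0, y0, x1, y1) rectangles; the
    geometry and step size are illustrative."""
    def inside(p):
        return any(x0 <= p[0] <= x1 and y0 <= p[1] <= y1
                   for x0, y0, x1, y1 in xzones)
    x, y = pos
    dx, dy = direction
    for _ in range(max_iters):
        if not inside((x, y)):
            return (x, y)
        # Continue along the ORIGINAL trajectory, as described above.
        x, y = x + step * dx, y + step * dy
    return None  # no exit found within the iteration budget
```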

[0513] In 31,100f, the forces on transiting nodes at the new positions (post-speculation) are evaluated. See the discussion relating to Fig. 32 located elsewhere herein for further information.

[0514] In 31,100g, the transition criteria are evaluated and examined. If the transition is accepted, then the node associated with the transiting node acquires the new coordinates. Otherwise the coordinates as determined in 31,100c are retained. See the discussion relating to Fig. 33 located elsewhere herein for further information.

[0515] Fig. 32 illustrates an embodiment of SDI-related force calculations in a tunneling congestion relief context. In 32,200a, forces on the node are cleared and preparations are made for the field calculation.

[0516] In 32,200b, forces on each node due to all non-field interactions are summed, including all connectivity and timing based pin-to-pin forces, as well as any other nodal interaction forces present.

[0517] In 32,200c, gate field components are computed. The first time through (pre-speculation phase), a full field calculation is performed. The pre-speculation phase is with the nodes snapped to the nearest xzone boundary, so the result represents a result assuming no nodes transit. The second time through (post-speculation phase), the field calculation from the first phase is used, but applied to the speculative nodal coordinates. That is, it is assumed that the fields are not significantly changed on a global scale as a result of tunneling. In some usage scenarios, since only a small number of transitions are considered relative to the total number of nodes, the assumption serves as a reasonable approximation, and may be beneficial for computational efficiency since field computations for each individual speculation are avoided.

[0518] Fig. 33 illustrates an embodiment of evaluation of tunneling transition criteria. In 33,300a, the speculative node coordinates are examined to see if there are violations of any node region constraints and whether the nodes fall into a legal logic area. If there is any violation, then the transition is rejected.

[0519] In 33,300b, a statistical window on how many transitions are considered is applied. In some implementations the window is small (such as 1%, 2%, 5%, or 10%) compared to unity, but not so small that an inordinate number of passes through the speculator routines are required to process all qualifying nodes. The windowing helps prevent sloshing, where many nodes tunnel from a high to a low density region at once, altering the density so much that the nodes tunnel back later. In other words, the statistical window helps to ensure that approximations made with respect to 32,200c (of Fig. 32) are valid.

[0520] In 33,300c, a variety of biasing factors are applied. In some implementations the factors are multiplied together. In some implementations one or more of the factors is less than unity. The factors include any combination of the following:
- A default biasing factor.
- A bias against multiple transitions in a row, to ensure a longer relaxation time.
- A distance based biasing, to make it more difficult to travel long distances. The distance based biasing may involve either a hard limit or a functional dependence on distance traveled (e.g. linear or quadratic).
- A distance based biasing specific to timing critical nodes. Nodes on a critical path may have an unpredictable effect on timing due to tunneling, so the critical path nodes may be selectively further constrained compared to other nodes.

[0521] In 33,300d, the magnitudes of the forces on the node at the old and the new positions are computed. If the new force magnitude after biasing is less than the old force magnitude, then the transition is considered to be energetically favorable and is therefore accepted. Otherwise the transition is rejected.
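The acceptance test of 33,300d can be sketched as follows. Applying the combined bias as a multiplier on the new force magnitude is an assumed interpretation of "after biasing"; the force representation is illustrative:

```python
def accept_transition(old_force, new_force, bias_factors):
    """A tunneling transition is energetically favorable (accepted) if
    the biased new force magnitude is below the old force magnitude.
    Forces are (fx, fy) vectors; the bias factors of 33,300c are
    multiplied together, per the description above."""
    def mag(f):
        return (f[0] ** 2 + f[1] ** 2) ** 0.5
    bias = 1.0
    for factor in bias_factors:
        bias *= factor
    return mag(new_force) * bias < mag(old_force)
```

With this form, a bias factor above unity (e.g. the distance-based bias) makes a transition harder to accept, while a factor below unity makes it easier.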

CLOCK TREE SYNTHESIS (CTS) EMBODIMENTS

[0522] CTS is a process for creating a clock network in an Integrated Circuit (IC) physical design flow. CTS has general applicability to design flows having limited placement options for clock buffers, such as SAF-based design flows. Note that although CTS is described herein within a general context of an SDI-based flow, there are applications to other types of design flows using conventional EDA tools. In some usage scenarios a structured ASIC design has one or more clock signals that fan out to many (perhaps thousands) of register clock pins. A register clock pin may be a clock pin of a flip-flop, a latch, or clock pins of embedded memory and other IP blocks.

[0523] Clock nets produced by logic synthesis or derived from schematics act as placeholders for CTS-produced clock nets. Each of the logic synthesized clock nets drives a high drive strength buffer (an ideal clock). Each of the CTS-produced clock nets includes one or more levels of buffers, interconnect wires, and other gating logic such as clock_and, clock_or, clock_mux, and other similar clock manipulation elements. In some embodiments CTS is run post placement so that precise coordinates of the clock pins driven by each clock net are known (such as portions of processing performed in conjunction with "Buffering Clock Tree Synthesis Timing Driven Buffering/Resizing" 821 of Fig. 8A).
[0524] In some implementations a CTS tool builds a clock network that strives to optimize characteristics of the clock network including skew and latency. Clock skew is the difference of signal arrival times at the clock pins of two registers. The CTS tool optimizes the maximum clock skew of the circuit, i.e. the largest clock skew between any pair of registers that have timing paths (setup/hold) between them is minimized.

[0525] Clock latency is the delay from the root of a clock tree to a clock input pin of a register. The CTS tool optimizes the maximum latency, i.e. the largest delay is minimized. In addition to the skew and latency metrics, there are other considerations, such as power and routing congestion, addressed by the CTS tool. The CTS tool attempts to optimize (i.e. minimize) the buffers and wire resources used for clock distribution since the resources directly impact circuit routing congestion and dynamic power usage.
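The skew and latency definitions from the two paragraphs above can be computed as follows. The `arrival` mapping (pin name to delay from the clock root) and the list of register pairs sharing a timing path are illustrative data shapes:

```python
def clock_metrics(arrival, timing_pairs):
    """Compute (max skew, max latency) for a clock network. `arrival`
    maps each register clock pin to its delay from the clock root;
    skew is counted only between register pairs that have timing
    paths between them, per the definition above."""
    max_skew = max((abs(arrival[a] - arrival[b]) for a, b in timing_pairs),
                   default=0.0)
    max_latency = max(arrival.values(), default=0.0)
    return max_skew, max_latency
```

Note that a pin pair with no timing path between them contributes nothing to the skew metric, even if their arrival times differ widely.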

[0526] In some embodiments CTS is performed in a post detail placement phase to enable building optimized clock networks, based on the locations of clock leaf pins. Gating logic enables power reduction by selectively turning on and off sub-trees of a clock tree. Clock selector logic (such as using a clock_mux) multiplexes multiple user clocks and test clocks. A clock tree may have several levels of clock selector logic gates and several levels of clock gating logic gates. In some usage scenarios clock gating circuitry is pre-designed by the user at a hardware description level and is then synthesized into gates by a synthesis tool. The CTS tool balances clock networks while taking into consideration the delays of various gating logic, thus treating the gating logic transparently and automatically.

[0527] Fig. 34A illustrates an example clock tree suitable for input to a CTS tool for SAF-based design flows. Primary clock domains are illustrated as pclk0 and pclk1. Gated clock sub-domains are illustrated as gclk0 and gclk1. A clock selector based clock sub-domain is illustrated as mclk. Clock pins of registers are illustrated as ckp0, ckp1, ... ckpn; ckg0, ... ckgn; cks0, cks1, ... cksn; and cksg0, ... cksgn. Register clock pins ckg0, ... ckgn and cksg0, ... cksgn are associated with gated clocks. Register pins cks0, cks1, ... cksn are associated with selected clocks. Register clock pins cksg0, ... cksgn are associated with two levels of clock processing (select and gate functions).

[0528] Fig. 34B illustrates an example clock tree output from the CTS tool operating on the input illustrated in Fig. 34A. In the illustrated output, the various Clock Networks produced by the CTS tool (according to the input illustrated by Fig. 34A) are shown driving the register clock pins.

[0529] Fig. 34C illustrates an example clock tree network. Leaf buffers are illustrated as b1, b2, b3, and b4. Each of the buffers is shown driving (or fanning out to) a respective sea of clock pins, as illustrated conceptually by the triangular element at each respective driver output. Terminals of the clock network are illustrated as t1, t2, and t3. Selected terminal buffers are illustrated as tb1 and tb2. A clock root is illustrated as CT. The illustrated clock tree network is representative of some implementations of the Clock Networks of Fig. 34B. For example, consider the Clock Network of Fig. 34B driving register clock pins ckp0, ckp1, ... ckpn. CT of Fig. 34C corresponds to the element driving pclk0. Leaf buffer b1 drives ckp0, leaf buffer b2 drives ckp1, and so forth.

[0530] Fig. 35 illustrates an overview of an embodiment of a CTS flow. According to various embodiments the CTS flow includes any combination of floorplan driven clock partitioning, topological clock sorting, top-down recursive bi-partitioning, clock domain (and sub-domain) processing, bottom-up skew minimization, and top-down buffer placement.

[0531] Floorplan driven clock partitioning (such as illustrated in Fig. 35) may be used when a die floorplan has extensive arrays of RAM and/or IP structures that lack suitable sites or slots for clock tree buffer elements. When the CTS tool builds a clock tree, buffer sites at intermediate points of each clock network are used to drive two sub-trees "underneath" the respective intermediate point. Having large rows (columns) of RAM/IP blocks implies that there are extensive die regions that are either completely devoid of clock buffer sites or have the sites at sub-optimal locations. Therefore, CTS preprocesses the clock network and embeds Pseudo-clock Sub-Domains (PSDs) that are first balanced within each row (column). Subsequently, the clock sub-domains are deskewed across logic rows (columns). The first level PSDs can be deskewed by buffer resources within a row (column), thus alleviating the need to find sites over RAM and/or IP regions.

[0532] Fig. 36A illustrates an example die floorplan of a design having embedded RAM or other IP blocks. Regions 36,300a represent an I/O ring. Regions 36,300b1, 36,300b2, and 36,300b3 represent rows of embedded RAMs. Regions 36,300c1, 36,300c2, and 36,300c3 represent rows of logic blocks. CTS clock preprocessing proceeds as follows. Within each PSD, all clock leaf pins in each contiguous logic region (such as each of regions 36,300c1, 36,300c2, and 36,300c3) are merged so the leaf pins fan out from a single Root Clock row (column) Buffer (RCB). The RCB is optimally placed at the centroid of the bounding box encompassing all the leaf clock pins within the respective logic region.
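The RCB placement rule above reduces to a small geometric computation. The sketch below is a minimal illustration (the function name and the pin representation as (x, y) tuples are assumptions for illustration); note that it computes the centroid of the bounding box, not the centroid of the pin set itself.

```python
def rcb_location(leaf_pins):
    """Place a Root Clock row/column Buffer (RCB) at the centroid of the
    bounding box enclosing all leaf clock pins of one logic region.

    leaf_pins: iterable of (x, y) pin coordinates.
    """
    xs = [x for x, _ in leaf_pins]
    ys = [y for _, y in leaf_pins]
    # Centroid of the bounding box (not of the pin cloud): midpoint of the
    # min/max extents in each dimension.
    return ((min(xs) + max(xs)) / 2.0, (min(ys) + max(ys)) / 2.0)
```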

[0533] All RAM clock pins are then combined with logic clock pins by drawing a partitioning line through the middle of each RAM region. For example, if there are RAM clock pins in region 36,300b2, then each one is merged with the clock pins of one of the adjacent regions 36,300c1 or 36,300c2, depending on the proximity of the respective RAM clock pin to the adjacent regions (i.e., the closest one of the regions is chosen).
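For a horizontal RAM row, "drawing a partitioning line through the middle" amounts to splitting the RAM pins at the row's vertical midline. A minimal sketch, assuming rows stacked vertically and pins as (x, y) tuples (the function name and region representation are hypothetical):

```python
def assign_ram_pins(ram_pins, ram_ymin, ram_ymax):
    """Split RAM clock pins of one RAM row between the two adjacent
    logic regions, by proximity to the row's horizontal midline.

    Pins below the midline merge with the lower logic region; pins at or
    above it merge with the upper region (i.e. the closest region wins).
    """
    midline = (ram_ymin + ram_ymax) / 2.0
    lower, upper = [], []
    for pin in ram_pins:
        (lower if pin[1] < midline else upper).append(pin)
    return lower, upper
```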

[0534] Then each of the region PSDs is deskewed individually. In some usage scenarios the deskewing is performed by combining even and odd row (column) RCBs separately. In other words, every other row (column) is processed together. In situations where RAM (or IP) rows (columns) alternate with logic block rows (columns), and the rows (columns) are of approximately the same size, processing even/odd rows (columns) separately simplifies equidistant placement of RCB pairs, since the center of each RCB pair will be in a logic block row (column). For example, RCBs associated with region 36,300c1 are processed with RCBs associated with region 36,300c3, and equidistant placement may be satisfied by region 36,300c2, a logic region. Note that the RCBs associated with a logic region may include RAM clock pins from adjacent RAM regions; for example, region 36,300c1 RCBs include merged RAM clock pins from the upper half of region 36,300b2 and the lower half of region 36,300b1.

[0535] Subsequently, the even and odd RCBs are deskewed at the clock root. The aforementioned merging, partitioning, and RCB placement processing is performed for each primary clock. The leaf clock pins driven by gated-clock and clock-selector cells are treated transparently during the processing. If a gated-clock or clock-selector logic drives leaf clock pins in multiple logic regions, then the gating logic is replicated in each of the respective regions the gated clock fans out to, thus enabling transparent merging of un-gated and gated-clock leaf pins.

[0536] Fig. 36B illustrates a portion of a clock net in the context of a portion of Fig. 36A. Clock net "clk" feeds both un-gated and gated clock pins that span logic regions 36,300c1 and 36,300c2. The gated clock is replicated in region 36,300c2 so that the RCB in each region is enabled to independently drive both the un-gated and the gated branches of the clock trees. The replication technique reduces multi-level clock balancing across RAM regions and the introduction of skew uncertainties.

[0537] Topological clock sorting, or domain ordering (such as illustrated in Fig. 35), is performed so that the CTS tool visits the clock domains in an order that facilitates deskewing of lower-level sub-domains prior to higher-level sub-domains. In some embodiments various clock sorting functions are performed by software routines implemented in a topological sorter.

[0538] In some usage scenarios a primary clock has several gated-clock and select-clock logic based sub-domains. As shown in Fig. 34A, the main clock (clk) fans out to several leaf-level clock pins after several levels of gating (gclk0, mclk, and gclk1).

[0539] The sub-domains gclk0, mclk, and gclk1 carry the same primary clock (clk), but are gated (controlled) by user logic to enable selectively turning off for one or more clock cycles. Clock distribution information of Fig. 34A is processed by the topological sorter to produce the sub-domain ordering: gclk1 -> mclk -> gclk0 -> pclk0 -> pclk1 -> clk. The ordering ensures that when the un-gated leaf-level pins of clk nets are being deskewed with the gated-clock pin (gclk0), the gated clock pin has already been completely processed (expanded) and any associated clock network latency is determined.

[0540] Clock domain (and sub-domain) processing (such as illustrated in Fig. 35) includes processing the domains according to the topological domain ordering. A Clock Domain Processor (CDP) of the CTS tool first collects all clock pins to be deskewed. A user may mark pins to be excluded from deskewing, and the CDP obeys the marking. The CDP forms two-level clusters. For all the leaf clock pins that are pins of a leaf-level register (such as flip-flops, latches, and RAM blocks), recursive partitioning forms bottom-up clusters that may be driven by a leaf-level clock buffer.

[0541] Clustering of leaf-level clock pins (such as illustrated in Fig. 35) is performed via recursive partitioning of all the leaf-level clock pins, and forms small, well-formed clock pin clusters that may be driven by leaf-level clock buffers, thus reducing the complexity of leaf-level clock buffer implementation. The partitioning uses recursive bipartitioning with an objective function that minimizes the diameter of the polygon formed by all pins in a partition.

[0542] As the diameter-of-the-polygon computation has polynomial complexity, in some implementations a fast heuristic technique with linear complexity is used. The linear complexity technique computes an NSP of a bounding box of all leaf-level pins in a partition. Clusters are also formed to increase relative "closeness" to other clusters having common setup and hold paths. Cluster closeness of two clusters is the number of clock buffers common to the clusters. In other words, tightly coupled leaf clock pins are grouped to share relatively many clock buffers, thus enabling more efficient skew reduction.

[0543] Fig. 37A illustrates an example of timing-driven pin swapping. As illustrated, it is preferable to partition clusters as P1 = {La, Ca}, P2 = {Lb, Cb} instead of P1 = {La, Lb} and P2 = {Ca, Cb}. The former promotes sharing of clock buffers between launch and capture flip-flops, thereby reducing the skew between launch and capture flip-flops, since unshared clock buffers may be subject to separate process, voltage, and temperature variations and thus may introduce skew.

[0544] During recursive bipartitioning, each partition is scored based on timing relationships between each pin and every other pin of the partition. Cluster cost is a weighted sum of interconnect wiring cost and cluster-closeness cost. The interconnect wiring cost is determined from the NSP of the bounding box of all the pins constituting the cluster. For example, partition cost may be given by:
part_cost = 0.5 * cic*cic + 0.5 * ctc*ctc
where
cic is the cluster interconnect cost, given by
cic = ( 1 - part_interconnect_cost / best_interconnect_cost ); and
ctc is the cluster timing cost, given by
ctc = ( 1 - part_timing_cost / best_timing_cost ).
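The partition cost formula above translates directly to code. A minimal sketch (function and parameter names are assumptions; the formula itself follows the specification): normalized deviations from the best-seen interconnect and timing costs are squared and combined with equal 0.5/0.5 weights, so a partition matching the best-known costs scores zero.

```python
def partition_cost(part_interconnect_cost, best_interconnect_cost,
                   part_timing_cost, best_timing_cost):
    """Score a candidate bipartition; lower is better (0 = best known)."""
    # cic: cluster interconnect cost, normalized against the best seen.
    cic = 1.0 - part_interconnect_cost / best_interconnect_cost
    # ctc: cluster timing cost, normalized against the best seen.
    ctc = 1.0 - part_timing_cost / best_timing_cost
    return 0.5 * cic * cic + 0.5 * ctc * ctc
```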

[0545] Additionally, pairwise swapping of edge pins based on timing relationships of the pins within the cluster is performed. The swapping is directed to achieve maximal common launch and capture paths for a pair of clock pins that have either a setup path or a hold path in common.

[0546] Fig. 37B illustrates an example of the effects of (top-down) clock tree partitioning. A random distribution of clock pins is illustrated in the upper portion of the figure. Results of clock tree partitioning and cluster formation are illustrated in the lower portion of the figure. The CDP performs top-down partitioning using leaf-level buffer input pins and any clock sub-domain clock pins. Clock sub-domain clock input pins include input pins of gated clock cells, clock selector cells, and derived clock pins of flip-flops. The clock sub-domains are processed top-down instead of being clustered with leaf-level clock pins, thus enabling the insertion delay of the clock sub-domain to be utilized to balance the sub-domains. As illustrated, results of a first recursive partitioning pass are shown as 37,100. Results of a pair of (independent) second recursive partitioning passes are shown as 37,200a and 37,200b. Results of a third recursive partitioning pass are shown as 37,300b. Note that although the recursive partitioning results are illustrated as straight cut-lines splitting contiguous regions, various embodiments and starting conditions may result in cut-lines of any shape or form, such as zig-zags, curves, and so forth. Further note that the split regions may be non-contiguous; i.e., they may form one or more "islands" driven by a single leaf-level buffer.

[0547] Fig. 38 illustrates an analysis according to an embodiment of clock domain and sub-domain partitioning. A clock "Root" is illustrated with relationships to leaf buffers lb1, lb2, lb3, lb4, lb5, lb6, and lb7. A tree of clock terminals is illustrated by t1, t2, t3, t4, t5, t6, and t7. In some embodiments, edges are added to represent timing relationships (such as setup and hold times) between leaf-level buffers. One type of timing relationship between first and second buffers is when the first buffer drives a first storage element, the second buffer drives a second storage element, and the two storage elements are coupled via a path having a setup (or hold) timing constraint. An example setup (hold) timing relationship between a flip-flop driven by lb1 and a flip-flop driven by lb4 is represented conceptually as dashed-line 38,100. As illustrated, skew is minimized between the two flip-flops by driving lb1 and lb4 via the same clock terminal (t1).

[0548] The CDP creates distinct clock domains for the following types of clock nets:
- Primary clock nets;
- Clock nets driven by gated clock cells;
- Clock nets driven by clock selector cells;
- Pseudo clock domains (if floorplan-driven clock partitioning has been performed); and
- Derived clock nets.
[0549] Timing relationships between the leaf-level buffers are used to create optimum timing-driven partitions. A scoring function for a partition is a function of interconnect cost and timing cost. To determine setup/hold timing relationships between leaf-level buffers, an abstract graph is used, as illustrated in the figure, having an edge between two leaf-level buffers if a setup/hold path exists between elements driven by the two leaf-level buffers. The weight of the edge is the number of setup/hold paths between the two leaf-level buffers.
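Building that abstract graph is a straightforward counting exercise. A minimal sketch, assuming setup/hold paths are reported as pairs of leaf-buffer names (the function name and input representation are assumptions for illustration); each reported path increments the weight of the undirected edge between its two buffers.

```python
from collections import defaultdict

def build_timing_graph(setup_hold_paths):
    """Build the abstract leaf-buffer graph: one undirected edge per
    buffer pair, weighted by the number of setup/hold paths between
    elements driven by the two buffers.

    setup_hold_paths: iterable of (buf_a, buf_b) pairs, one per path.
    """
    weight = defaultdict(int)
    for a, b in setup_hold_paths:
        edge = tuple(sorted((a, b)))  # canonicalize: edges are undirected
        weight[edge] += 1
    return dict(weight)
```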

[0550] As a result of top-down partitioning, the clock tree has two types of nodes: terminals and paths. A terminal represents a graph node that is processed by downstream modules for skew minimization. Each of the terminals has a pair of paths that represent the respective buffer path from the respective parent terminal to the respective pair of child terminals.

[0551] Clock domain edges are analyzed so that clock clusters recursively propagate the clock edge (e.g., a rising edge or a falling edge) used by the clock pin clusters at leaf level. Thus only one of rise time or fall time is propagated for all intervening levels of logic cells (including buffers and non-buffers).

[0552] During skew minimization (such as illustrated in Fig. 35), each internal terminal of a clock network is analyzed in a bottom-up visitation order, and an ideal delay for each respective buffer pair is determined that will minimize the skew of the terminal. Skew minimization uses a successive approximation approach to determine the types of buffer(s) and interconnect lengths associated with each of the buffers.

[0553] During a first-pass skew optimization (or minimization), a default input transition time is used to compute delays of all clock buffers. For each terminal, respective locations of buffer pairs to be placed are determined that would minimize skew for an entire sub-tree. If the skew cannot be minimized by placing the buffer pair between two child terminals, then an amount of meandering interconnect/buffers to minimize the skew is determined.

[0554] An iterative skew improver portion of the CTS tool performs multi-pass skew computation and idealized delay allocation for each stage of a buffer tree. The skew improver performs a multi-pass optimization because skew minimization is done bottom-up but input transition is propagated top-down. Therefore, during the first pass, a skew minimizer uses a default input transition for each buffer stage of a clock network and performs skew minimization at each level. Subsequently, a clock network timing update is performed that updates transition times at each level, top-down, using an estimated output load on each of the buffers of the network.

[0555] A second pass of skew minimization is performed that uses the propagated input transition time at each of the clock buffers. Subsequent passes are performed (such as 1, 2, 3, 4, or 5 iterations) seeking convergence of the skew minimizer.
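The alternation described in [0554]-[0555] — bottom-up deskew followed by a top-down timing update, repeated until the skew stops changing — can be sketched as a simple driver loop. The `network` interface below (`deskew_bottom_up`, `update_timing_top_down`, `skew`) is entirely hypothetical, named only to mirror the two phases of the text; this is a sketch of the control flow, not of the tool's implementation.

```python
def iterative_skew_improve(network, max_passes=5, tol=1e-3):
    """Multi-pass skew minimization: bottom-up deskew, then top-down
    transition-time propagation, repeated seeking convergence.

    `network` is a hypothetical object exposing deskew_bottom_up(),
    update_timing_top_down(), and skew() (assumed interface).
    """
    prev_skew = float("inf")
    for _ in range(max_passes):
        network.deskew_bottom_up()        # uses current transition times
        network.update_timing_top_down()  # propagate input transitions
        skew = network.skew()
        if abs(prev_skew - skew) < tol:   # converged: skew stopped moving
            break
        prev_skew = skew
    return network
```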

[0556] Clock network timing is updated in accordance with buffer placement, delays of buffer gates, and interconnect delays. Since cell delays are functions of input transition time and phase, the clock network timing (arrival time and transition time) is kept relatively accurate to ensure that buffer cell delays computed at each of the terminals match closely with post-routed-extracted delays.

[0557] The transition time at the output of a gate is a function of the input transition time at the gate and the effective output load (Ceff) driven by the gate. The proper phase of transition times is propagated down a clock network to accurately estimate transition times and cell delays at the next level of the clock network. In some usage scenarios (such as an SAF-based design flow) buffers may not be placed at ideal locations (i.e., there is no logic block in a proper buffer position). Thus clock buffer placement is performed iteratively. Whenever a buffer is placed at a somewhat non-ideal location, the effect of that buffer placement is propagated throughout the clock sub-tree.

[0558] A buffer placer module of the CTS tool inserts a pair of buffers at each terminal of a clock network. Unlike standard cell design flows, where a buffer may be placed anywhere in a row of standard cell logic, structured ASICs are constrained in where buffer resources may be placed.

[0559] Buffer placement is performed recursively down the clock tree. At each terminal, the buffer placer evaluates a finite number of buffer pair sites for suitability as the buffer pair of the respective terminal. The buffer pairs are located by using a search window around an ideal buffer pair location.

[0560] The buffer placer uses a speculative scoring function to score each pair of buffers. Each buffer pair is scored on the basis of the objective function:
buf_pair_cost = 0.9 * buf_delay_cost + 0.1 * buf_dist_cost;
where
buf_delay_cost = dd0*dd0 + dd1*dd1 + dd2*dd2;
where
dd0 = ( 1 - est_delay/ideal_delay ) for the respective parent terminal;
dd1 = ( 1 - est_delay/ideal_delay ) for the respective left terminal; and
dd2 = ( 1 - est_delay/ideal_delay ) for the respective right terminal.

[0561] Similarly,
buf_dist_cost = dbb*dbb + msd1*msd1 + msd2*msd2;
where
dbb = Manhattan distance between the pair of buffers.
Ideally the pair of buffers should be as close as possible to reduce any delay uncertainty between a parent buffer and the respective buffer pairs. Using a dbb term penalizes any pair of buffers that are far apart.
msd1(2) = distance between the left/right buffer and the merging segment.
A merging segment is a line that goes between a pair of idealized buffer locations. The distance between the buffer location and the merging segment is measured. The idealized buffer locations for the downstream sub-tree are computed with the parent buffer being ideally placed on the merging segment. If the actual placement of the buffer deviates too much from the idealized line segment, then the estimates for the downstream terminal are no longer valid.
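The buffer-pair objective of [0560]-[0561] combines three normalized delay deviations (parent, left, right) with a distance penalty. A minimal sketch (function and parameter names are assumptions; the arithmetic follows the two formulas): a pair whose estimated delays exactly match the ideals and whose buffers sit together on the merging segment scores zero.

```python
def buf_pair_cost(est_delays, ideal_delays, dbb, msd1, msd2):
    """Speculative buffer-pair score; lower is better.

    est_delays, ideal_delays: 3-tuples for the (parent, left, right)
    terminals. dbb: Manhattan distance between the two buffers of the
    pair. msd1, msd2: distances from the left/right buffer to the
    merging segment.
    """
    # dd0/dd1/dd2: normalized delay deviation at parent/left/right.
    dd = [1.0 - est / ideal for est, ideal in zip(est_delays, ideal_delays)]
    buf_delay_cost = sum(d * d for d in dd)
    # Penalize buffers far apart (dbb) or off the merging segment (msd).
    buf_dist_cost = dbb * dbb + msd1 * msd1 + msd2 * msd2
    return 0.9 * buf_delay_cost + 0.1 * buf_dist_cost
```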

[0562] When two sub-trees have considerable differences in accumulated insertion delays, then delay buffers are inserted to match the insertion delay at a parent terminal. Differences in insertion delays may occur in some usage scenarios where one branch of the clock sub-tree is a (relatively large) gated-clock sub-domain and the remaining branches are relatively smaller gated or un-gated clock sub-domains.

[0563] Delay buffers are scored using an objective scoring function:
delay_buf_cost = 0.7 * dcost*dcost + 0.2 * ncost*ncost + 0.1 * pcost*pcost;
where
dcost = ( 1 - ( accum_delay + incr_delay )/ideal_delay );
ncost = ( 1 - actual_length/ideal_length ); and
pcost = ( 1 - path_remaining_length / path_ideal_remaining_length ).

[0564] Besides the delay cost (which has the highest weighting), delay_buf_cost uses two other metrics to evaluate a candidate delay buffer. ncost factors in any deviation from the ideal length of an interconnect for a respective path, and pcost factors in deviation of the path length from the respective ideal path length.
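The delay-buffer score of [0563] can likewise be written down directly. A minimal sketch (function and parameter names are assumptions; weights and terms follow the formula): a candidate whose accumulated-plus-incremental delay and lengths all hit their ideals scores zero.

```python
def delay_buf_cost(accum_delay, incr_delay, ideal_delay,
                   actual_length, ideal_length,
                   path_remaining_length, path_ideal_remaining_length):
    """Objective score for a candidate delay buffer; lower is better."""
    # dcost: deviation of total delay from ideal (highest weight, 0.7).
    dcost = 1.0 - (accum_delay + incr_delay) / ideal_delay
    # ncost: deviation of this interconnect's length from ideal.
    ncost = 1.0 - actual_length / ideal_length
    # pcost: deviation of remaining path length from its ideal.
    pcost = 1.0 - path_remaining_length / path_ideal_remaining_length
    return 0.7 * dcost * dcost + 0.2 * ncost * ncost + 0.1 * pcost * pcost
```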

[0565] If the skew minimizer determines that the path requires some amount of meandering interconnect to add extra delay at the buffer, then a dummy-load insertion technique is used to implement the incremental meandering wire resource. A dummy load inserter portion of the CTS tool searches for optimal dummy load sites (typically a low-drive-strength inverter) on an SAF-based chip and connects the buffer to the dummy load.

[0566] The CTS tool balances for max and min corners simultaneously, as the optimum skew for a max corner is not the optimum skew for a min corner. In some usage scenarios skew at the max corner typically affects the setup timing paths, whereas clock skew for the min corner affects the hold time paths. During deskewing monitored by the CTS tool, timing for both max and min corners (also known as mixed mode) is considered, and the CTS tool uses scoring functions (as described elsewhere herein) that use a weighted sum of max and min scoring functions.

[0567] Post-routed-extracted parasitics are used to perform clock tree optimization. The clock optimization is used to achieve timing closure in designs having correlation issues between predicted clock skew and post-extracted clock skew. In some usage scenarios, the CTS tool achieves a high degree of correlation with post-extracted skew using several techniques as described elsewhere herein. The CTS tool performs several clock tree optimizations, such as replacement of a clock gating cell, replacement of terminal buffers, dummy load insertion, and swapping a CTS buffer for some other morphable element that may be implemented as a buffer.

CONCLUSION
[0568] Certain choices have been made in the description merely for convenience in preparing the text and drawings, and unless there is an indication to the contrary the choices should not be construed per se as conveying additional information regarding structure or operation of the embodiments described. Examples of the choices include: the particular organization or assignment of the designations used for the figure numbering and the particular organization or assignment of the element identifiers (i.e., the callouts or numerical designators) used to identify and reference the features and elements of the embodiments.
[0569] Although the foregoing embodiments have been described in some detail for purposes of clarity of description and understanding, the invention is not limited to the details provided. There are many embodiments of the invention. The disclosed embodiments are exemplary and not restrictive.
[0570] It will be understood that many variations in construction, arrangement, and use are possible, consistent with the description, and within the scope of the claims of the issued patent. For example, interconnect and function-unit bit-widths, clock speeds, and the type of technology used are variable according to various embodiments in each component block. The names given to interconnect and logic are merely exemplary, and should not be construed as limiting the concepts described. The order and arrangement of flowchart and flow diagram process, action, and function elements are variable according to various embodiments. Also, unless specifically stated to the contrary, value ranges specified, maximum and minimum values used, or other particular specifications (such as integration techniques and design flow technologies) are merely those of the described embodiments, are expected to track improvements and changes in implementation technology, and should not be construed as limitations.

[0571] Functionally equivalent techniques known in the art are employable instead of those described to implement various components, sub-systems, functions, operations, routines, and sub-routines. It is also understood that many functional aspects of embodiments are realizable selectively in either hardware (i.e., generally dedicated circuitry) or software (i.e., via some manner of programmed controller or processor), as a function of embodiment-dependent design constraints and technology trends of faster processing (facilitating migration of functions previously in hardware into software) and higher integration density (facilitating migration of functions previously in software into hardware). Specific variations in various embodiments include, but are not limited to: differences in partitioning; different form factors and configurations; use of different operating systems and other system software; use of different interface standards, network protocols, or communication links; and other variations to be expected when implementing the concepts described herein in accordance with the unique engineering and business constraints of a particular application.

[0572] The embodiments have been described with detail and environmental context well beyond that required for a minimal implementation of many aspects of the embodiments described. Those of ordinary skill in the art will recognize that some embodiments omit disclosed components or features without altering the basic cooperation among the remaining elements. It is thus understood that much of the details disclosed are not required to implement various aspects of the embodiments described. To the extent that the remaining elements are distinguishable from the prior art, components and features that are omitted are not limiting on the concepts described herein.
[0573] All such variations in design comprise insubstantial changes over the teachings conveyed by the described embodiments. It is also understood that the embodiments described herein have broad applicability to other computing and networking applications, and are not limited to the particular application or industry of the described embodiments. The invention is thus to be construed as including all possible modifications and variations encompassed within the scope of the claims of the issued patent.
