Merge tag 'powerpc-6.15-1' of git://git.kernel.org/pub/scm/linux/kernel/git/powerpc/linux

Pull powerpc updates from Madhavan Srinivasan:

 - Remove support for IBM Cell Blades

 - SMP support for microwatt platform

 - Support for inline static calls on PPC32

 - Enable pmu selftests for power11 platform

 - Enable hardware trace macro (HTM) hcall support

 - Support for limited address mode capability

 - Changes to RMA size from 512 MB to 768 MB to handle fadump

 - Misc fixes and cleanups

Thanks to Abhishek Dubey, Amit Machhiwal, Andreas Schwab, Arnd Bergmann,
Athira Rajeev, Avnish Chouhan, Christophe Leroy, Disha Goel, Donet Tom,
Gaurav Batra, Gautam Menghani, Hari Bathini, Kajol Jain, Kees Cook,
Mahesh Salgaonkar, Michael Ellerman, Paul Mackerras, Ritesh Harjani
(IBM), Sathvika Vasireddy, Segher Boessenkool, Sourabh Jain, Vaibhav
Jain, and Venkat Rao Bagalkote.

* tag 'powerpc-6.15-1' of git://git.kernel.org/pub/scm/linux/kernel/git/powerpc/linux: (61 commits)
  powerpc/kexec: fix physical address calculation in clear_utlb_entry()
  crypto: powerpc: Mark ghashp8-ppc.o as an OBJECT_FILES_NON_STANDARD
  powerpc: Fix 'intra_function_call not a direct call' warning
  powerpc/perf: Fix ref-counting on the PMU 'vpa_pmu'
  KVM: PPC: Enable CAP_SPAPR_TCE_VFIO on pSeries KVM guests
  powerpc/prom_init: Fixup missing #size-cells on PowerBook6,7
  powerpc/microwatt: Add SMP support
  powerpc: Define config option for processors with broadcast TLBIE
  powerpc/microwatt: Define an idle power-save function
  powerpc/microwatt: Device-tree updates
  powerpc/microwatt: Select COMMON_CLK in order to get the clock framework
  net: toshiba: Remove reference to PPC_IBM_CELL_BLADE
  net: spider_net: Remove powerpc Cell driver
  cpufreq: ppc_cbe: Remove powerpc Cell driver
  genirq: Remove IRQ_EDGE_EOI_HANDLER
  docs: Remove reference to removed CBE_CPUFREQ_SPU_GOVERNOR
  powerpc: Remove UDBG_RTAS_CONSOLE
  powerpc/io: Use standard barrier macros in io.c
  powerpc/io: Rename _insw_ns() etc.
  powerpc/io: Use generic raw accessors
  ...
commit 7b667acd69
Author: Linus Torvalds
Date:   2025-03-27 19:39:08 -07:00
142 changed files with 1043 additions and 12714 deletions

CREDITS

@ -2187,6 +2187,10 @@ D: Various ACPI fixes, keeping correct battery state through suspend
D: various lockdep annotations, autofs and other random bugfixes
S: Prague, Czech Republic
N: Ishizaki Kou
E: kou.ishizaki@toshiba.co.jp
D: Spidernet driver for PowerPC Cell platforms
N: Gene Kozin
E: 74604.152@compuserve.com
W: https://www.sangoma.com
@ -2197,6 +2201,9 @@ S: Markham, Ontario
S: L3R 8B2
S: Canada
N: Christian Krafft
D: PowerPC Cell support
N: Maxim Krasnyansky
E: maxk@qualcomm.com
W: http://vtun.sf.net
@ -2389,6 +2396,10 @@ S: ICP vortex GmbH
S: Neckarsulm
S: Germany
N: Geoff Levand
E: geoff@infradead.org
D: Spidernet driver for PowerPC Cell platforms
N: Phil Lewis
E: beans@bucket.ualr.edu
D: Promised to send money if I would put his name in the source tree.

@ -55,4 +55,5 @@ Date: May 2024
Contact: linuxppc-dev@lists.ozlabs.org
Description: read/write
This is a special sysfs file available to setup additional
-parameters to be passed to capture kernel.
+parameters to be passed to capture kernel. For HASH MMU it
+is exported only if the RMA size is greater than 768 MB.

@ -278,12 +278,7 @@ To reduce its OS jitter, do any of the following:
due to the rtas_event_scan() function.
WARNING: Please check your CPU specifications to
make sure that this is safe on your particular system.
-e. If running on Cell Processor, build your kernel with
-CBE_CPUFREQ_SPU_GOVERNOR=n to avoid OS jitter from
-spu_gov_work().
-WARNING: Please check your CPU specifications to
-make sure that this is safe on your particular system.
-f. If running on PowerMAC, build your kernel with
+e. If running on PowerMAC, build your kernel with
CONFIG_PMAC_RACKMETER=n to disable the CPU-meter,
avoiding OS jitter from rackmeter_do_timer().

@ -120,6 +120,28 @@ to ensure that crash data is preserved to process later.
e.g.
# echo 1 > /sys/firmware/opal/mpipl/release_core
-- Support for Additional Kernel Arguments in Fadump
Fadump has a feature that allows passing additional kernel arguments
to the fadump kernel. This feature was primarily designed to disable
kernel functionalities that are not required for the fadump kernel
and to reduce its memory footprint while collecting the dump.
Command to Add Additional Kernel Parameters to Fadump:
e.g.
# echo "nr_cpus=16" > /sys/kernel/fadump/bootargs_append
The above command is sufficient to add additional arguments to fadump.
An explicit service restart is not required.
Command to Retrieve the Additional Fadump Arguments:
e.g.
# cat /sys/kernel/fadump/bootargs_append
Note: Additional kernel arguments for fadump with HASH MMU are only
supported if the RMA size is greater than 768 MB. If the RMA
size is less than 768 MB, the kernel does not export the
/sys/kernel/fadump/bootargs_append sysfs node.
Implementation details:
-----------------------

@ -289,6 +289,17 @@ to be issued multiple times in order to be completely serviced. The
subsequent hcalls to the hypervisor until the hcall is completely serviced
at which point H_SUCCESS or other error is returned by the hypervisor.
**H_HTM**
| Input: flags, target, operation (op), op-param1, op-param2, op-param3
| Out: *dumphtmbufferdata*
| Return Value: *H_Success, H_Busy, H_LongBusyOrder, H_Partial, H_Parameter,
H_P2, H_P3, H_P4, H_P5, H_P6, H_State, H_Not_Available, H_Authority*
H_HTM supports setup, configuration, control and dumping of the Hardware Trace
Macro (HTM) function and its data. The HTM buffer stores tracing data for trace
types such as core instruction, core LLAT and nest.
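For illustration, an H_HTM call can be issued through the standard pseries
hcall wrapper. A minimal sketch, assuming the H_HTM_* macros added to
asm/hvcall.h by this series; the target and type values are illustrative,
not prescribed here::

    #include <asm/hvcall.h>

    static long htm_start_nest_trace(void)
    {
            unsigned long flags  = H_HTM_FLAGS_LOGICAL_TARGET;
            unsigned long target = H_HTM_TARGET_NODE_INDEX(0) |
                                   H_HTM_TARGET_NODAL_CHIP_INDEX(0);
            unsigned long op     = H_HTM_OP(H_HTM_OP_START) |
                                   H_HTM_TYPE(H_HTM_TYPE_NEST);

            /* H_Busy/H_LongBusyOrder returns need the usual retry loop. */
            return plpar_hcall_norets(H_HTM, flags, target, op, 0, 0, 0);
    }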
References
==========
.. [1] "Power Architecture Platform Reference"

@ -55,7 +55,6 @@ Contents:
ti/cpsw_switchdev
ti/am65_nuss_cpsw_switchdev
ti/tlan
toshiba/spider_net
wangxun/txgbe
wangxun/ngbe

@ -1,202 +0,0 @@
.. SPDX-License-Identifier: GPL-2.0
===========================
The Spidernet Device Driver
===========================
Written by Linas Vepstas <linas@austin.ibm.com>
Version of 7 June 2007
Abstract
========
This document sketches the structure of portions of the spidernet
device driver in the Linux kernel tree. The spidernet is a gigabit
ethernet device built into the Toshiba southbridge commonly used
in the SONY Playstation 3 and the IBM QS20 Cell blade.
The Structure of the RX Ring.
=============================
The receive (RX) ring is a circular linked list of RX descriptors,
together with three pointers into the ring that are used to manage its
contents.
The elements of the ring are called "descriptors" or "descrs"; they
describe the received data. This includes a pointer to a buffer
containing the received data, the buffer size, and various status bits.
There are three primary states that a descriptor can be in: "empty",
"full" and "not-in-use". An "empty" or "ready" descriptor is ready
to receive data from the hardware. A "full" descriptor has data in it,
and is waiting to be emptied and processed by the OS. A "not-in-use"
descriptor is neither empty nor full; it is simply not ready. It may
not even have a data buffer in it, or is otherwise unusable.
During normal operation, on device startup, the OS (specifically, the
spidernet device driver) allocates a set of RX descriptors and RX
buffers. These are all marked "empty", ready to receive data. This
ring is handed off to the hardware, which sequentially fills in the
buffers, and marks them "full". The OS follows up, taking the full
buffers, processing them, and re-marking them empty.
This filling and emptying is managed by three pointers, the "head"
and "tail" pointers, managed by the OS, and a hardware current
descriptor pointer (GDACTDPA). The GDACTDPA points at the descr
currently being filled. When this descr is filled, the hardware
marks it full, and advances the GDACTDPA by one. Thus, when there is
flowing RX traffic, every descr behind it should be marked "full",
and everything in front of it should be "empty". If the hardware
discovers that the current descr is not empty, it will signal an
interrupt, and halt processing.
The tail pointer tails or trails the hardware pointer. When the
hardware is ahead, the tail pointer will be pointing at a "full"
descr. The OS will process this descr, and then mark it "not-in-use",
and advance the tail pointer. Thus, when there is flowing RX traffic,
all of the descrs in front of the tail pointer should be "full", and
all of those behind it should be "not-in-use". When RX traffic is not
flowing, then the tail pointer can catch up to the hardware pointer.
The OS will then note that the current tail is "empty", and halt
processing.
The head pointer (somewhat mis-named) follows after the tail pointer.
When traffic is flowing, then the head pointer will be pointing at
a "not-in-use" descr. The OS will perform various housekeeping duties
on this descr. This includes allocating a new data buffer and
dma-mapping it so as to make it visible to the hardware. The OS will
then mark the descr as "empty", ready to receive data. Thus, when there
is flowing RX traffic, everything in front of the head pointer should
be "not-in-use", and everything behind it should be "empty". If no
RX traffic is flowing, then the head pointer can catch up to the tail
pointer, at which point the OS will notice that the head descr is
"empty", and it will halt processing.
Thus, in an idle system, the GDACTDPA, tail and head pointers will
all be pointing at the same descr, which should be "empty". All of the
other descrs in the ring should be "empty" as well.
The show_rx_chain() routine will print out the locations of the
GDACTDPA, tail and head pointers. It will also summarize the contents
of the ring, starting at the tail pointer, and listing the status
of the descrs that follow.
A typical example of the output, for a nearly idle system, might be::
net eth1: Total number of descrs=256
net eth1: Chain tail located at descr=20
net eth1: Chain head is at 20
net eth1: HW curr desc (GDACTDPA) is at 21
net eth1: Have 1 descrs with stat=x40800101
net eth1: HW next desc (GDACNEXTDA) is at 22
net eth1: Last 255 descrs with stat=xa0800000
In the above, the hardware has filled in one descr, number 20. Both
head and tail are pointing at 20, because it has not yet been emptied.
Meanwhile, hw is pointing at 21, which is free.
The "Have nnn decrs" refers to the descr starting at the tail: in this
case, nnn=1 descr, starting at descr 20. The "Last nnn descrs" refers
to all of the rest of the descrs, from the last status change. The "nnn"
is a count of how many descrs have exactly the same status.
The status x4... corresponds to "full" and status xa... corresponds
to "empty". The actual value printed is RXCOMST_A.
In the device driver source code, a different set of names is
used for these same concepts, so that::
"empty" == SPIDER_NET_DESCR_CARDOWNED == 0xa
"full" == SPIDER_NET_DESCR_FRAME_END == 0x4
"not in use" == SPIDER_NET_DESCR_NOT_IN_USE == 0xf
The RX RAM full bug/feature
===========================
As long as the OS can empty out the RX buffers at a rate faster than
the hardware can fill them, there is no problem. If, for some reason,
the OS fails to empty the RX ring fast enough, the hardware GDACTDPA
pointer will catch up to the head, notice the not-empty condition,
and stop. However, RX packets may still continue arriving on the wire.
The spidernet chip can save some limited number of these in local RAM.
When this local ram fills up, the spider chip will issue an interrupt
indicating this (GHIINT0STS will show ERRINT, and the GRMFLLINT bit
will be set in GHIINT1STS). When the RX ram full condition occurs,
a certain bug/feature is triggered that has to be specially handled.
This section describes the special handling for this condition.
When the OS finally has a chance to run, it will empty out the RX ring.
In particular, it will clear the descriptor on which the hardware had
stopped. However, once the hardware has decided that a certain
descriptor is invalid, it will not restart at that descriptor; instead
it will restart at the next descr. This potentially will lead to a
deadlock condition, as the tail pointer will be pointing at this descr,
which, from the OS point of view, is empty; the OS will be waiting for
this descr to be filled. However, the hardware has skipped this descr,
and is filling the next descrs. Since the OS doesn't see this, there
is a potential deadlock, with the OS waiting for one descr to fill,
while the hardware is waiting for a different set of descrs to become
empty.
A call to show_rx_chain() at this point indicates the nature of the
problem. A typical print when the network is hung shows the following::
net eth1: Spider RX RAM full, incoming packets might be discarded!
net eth1: Total number of descrs=256
net eth1: Chain tail located at descr=255
net eth1: Chain head is at 255
net eth1: HW curr desc (GDACTDPA) is at 0
net eth1: Have 1 descrs with stat=xa0800000
net eth1: HW next desc (GDACNEXTDA) is at 1
net eth1: Have 127 descrs with stat=x40800101
net eth1: Have 1 descrs with stat=x40800001
net eth1: Have 126 descrs with stat=x40800101
net eth1: Last 1 descrs with stat=xa0800000
Both the tail and head pointers are pointing at descr 255, which is
marked xa... which is "empty". Thus, from the OS point of view, there
is nothing to be done. In particular, there is the implicit assumption
that everything in front of the "empty" descr must surely also be empty,
as explained in the last section. The OS is waiting for descr 255 to
become non-empty, which, in this case, will never happen.
The HW pointer is at descr 0. This descr is marked 0x4.. or "full".
Since it's already full, the hardware can do nothing more, and thus has
halted processing. Notice that descrs 0 through 253 are all marked
"full", while descrs 254 and 255 are empty. (The "Last 1 descrs" is
descr 254, since tail was at 255.) Thus, the system is deadlocked,
and there can be no forward progress; the OS thinks there's nothing
to do, and the hardware has nowhere to put incoming data.
This bug/feature is worked around with the spider_net_resync_head_ptr()
routine. When the driver receives RX interrupts, but an examination
of the RX chain seems to show it is empty, then it is probable that
the hardware has skipped a descr or two (sometimes dozens under heavy
network conditions). The spider_net_resync_head_ptr() subroutine will
search the ring for the next full descr, and the driver will resume
operations there. Since this will leave "holes" in the ring, there
is also a spider_net_resync_tail_ptr() that will skip over such holes.
As of this writing, the spider_net_resync() strategy seems to work very
well, even under heavy network loads.
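Sketched with the same hypothetical helpers, the recovery scan is a linear
walk for the next "full" descr::

    /* RX interrupts keep arriving although the tail looks "empty":
     * the hardware must have skipped ahead, so search the whole ring
     * for the next "full" descr and resume servicing there. */
    descr = chain->tail;
    for (i = 0; i < chain->num_desc; i++, descr = descr->next) {
            if (descr_status(descr) == SPIDER_NET_DESCR_FRAME_END) {
                    chain->tail = descr;       /* skip over the hole */
                    break;
            }
    }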
The TX ring
===========
The TX ring uses a low-watermark interrupt scheme to make sure that
the TX queue is appropriately serviced for large packet sizes.
For packet sizes greater than about 1KBytes, the kernel can fill
the TX ring quicker than the device can drain it. Once the ring
is full, the netdev is stopped. When there is room in the ring,
the netdev needs to be reawakened, so that more TX packets are placed
in the ring. The hardware can empty the ring about four times per jiffy,
so it's not appropriate to wait for the poll routine to refill, since
the poll routine runs only once per jiffy. The low-watermark mechanism
marks a descr about 1/4th of the way from the bottom of the queue, so
that an interrupt is generated when the descr is processed. This
interrupt wakes up the netdev, which can then refill the queue.
For large packets, this mechanism generates a relatively small number
of interrupts, about 1K/sec. For smaller packets, this will drop to zero
interrupts, as the hardware can empty the queue faster than the kernel
can fill it.
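The watermark placement can be sketched as follows (field and flag names
hypothetical)::

    /* Mark the descr 1/4 of the way up from the bottom of the queue so
     * that draining past it raises an interrupt and wakes the netdev. */
    wm_pos = tx->num_queued / 4;
    descr  = tx->bottom;                  /* oldest queued descr */
    for (i = 0; i < wm_pos; i++)
            descr = descr->next;
    descr->cmd_status |= DESCR_TX_IRQ_ON_COMPLETE;   /* hypothetical flag */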

@ -22526,15 +22526,6 @@ F: include/linux/spi/
F: include/uapi/linux/spi/
F: tools/spi/
SPIDERNET NETWORK DRIVER for CELL
M: Ishizaki Kou <kou.ishizaki@toshiba.co.jp>
M: Geoff Levand <geoff@infradead.org>
L: netdev@vger.kernel.org
L: linuxppc-dev@lists.ozlabs.org
S: Maintained
F: Documentation/networking/device_drivers/ethernet/toshiba/spider_net.rst
F: drivers/net/ethernet/toshiba/spider_net*
SPMI SUBSYSTEM
M: Stephen Boyd <sboyd@kernel.org>
L: linux-kernel@vger.kernel.org

@ -288,6 +288,7 @@ config PPC
select HAVE_STACKPROTECTOR if PPC32 && $(cc-option,$(m32-flag) -mstack-protector-guard=tls -mstack-protector-guard-reg=r2 -mstack-protector-guard-offset=0)
select HAVE_STACKPROTECTOR if PPC64 && $(cc-option,$(m64-flag) -mstack-protector-guard=tls -mstack-protector-guard-reg=r13 -mstack-protector-guard-offset=0)
select HAVE_STATIC_CALL if PPC32
select HAVE_STATIC_CALL_INLINE if PPC32
select HAVE_SYSCALL_TRACEPOINTS
select HAVE_VIRT_CPU_ACCOUNTING
select HAVE_VIRT_CPU_ACCOUNTING_GEN
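HAVE_STATIC_CALL_INLINE means each static_call() site is patched in place as
a direct call rather than bouncing through a trampoline. For reference, the
generic API this selects into looks like the following sketch (the hook
functions are hypothetical, not part of this patch):

    #include <linux/static_call.h>

    static int hook_default(int arg) { return 0; }
    static int hook_fast(int arg)    { return arg * 2; }

    DEFINE_STATIC_CALL(my_hook, hook_default);

    static int use_hook(void)
    {
            /* On PPC32 this now compiles to a direct, patchable call. */
            return static_call(my_hook)(42);
    }

    static void switch_hook(void)
    {
            /* Retargeting rewrites each call site's instruction. */
            static_call_update(my_hook, hook_fast);
    }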
@ -413,12 +414,9 @@ config ARCH_HAS_ADD_PAGES
config PPC_DCR_NATIVE
bool
-config PPC_DCR_MMIO
-bool
config PPC_DCR
bool
-depends on PPC_DCR_NATIVE || PPC_DCR_MMIO
+depends on PPC_DCR_NATIVE
default y
config PPC_PCI_OF_BUS_MAP
@ -444,11 +442,6 @@ config PPC_PCI_BUS_NUM_DOMAIN_DEPENDENT
PCI domain dependent and each PCI controller on own domain can have
256 PCI buses, like it is on other Linux architectures.
config PPC_OF_PLATFORM_PCI
bool
depends on PCI
depends on PPC64 # not supported on 32 bits yet
config ARCH_SUPPORTS_UPROBES
def_bool y

@ -216,13 +216,6 @@ config PPC_EARLY_DEBUG_RTAS_PANEL
help
Select this to enable early debugging via the RTAS panel.
config PPC_EARLY_DEBUG_RTAS_CONSOLE
bool "RTAS Console"
depends on PPC_RTAS
select UDBG_RTAS_CONSOLE
help
Select this to enable early debugging via the RTAS console.
config PPC_EARLY_DEBUG_PAS_REALMODE
bool "PA Semi real mode"
depends on PPC_PASEMI

@ -173,7 +173,6 @@ src-plat-$(CONFIG_PPC_PS3) += ps3-head.S ps3-hvcall.S ps3.c
src-plat-$(CONFIG_EPAPR_BOOT) += epapr.c epapr-wrapper.c
src-plat-$(CONFIG_PPC_PSERIES) += pseries-head.S
src-plat-$(CONFIG_PPC_POWERNV) += pseries-head.S
src-plat-$(CONFIG_PPC_IBM_CELL_BLADE) += pseries-head.S
src-plat-$(CONFIG_MVME7100) += motload-head.S mvme7100.c
src-plat-$(CONFIG_PPC_MICROWATT) += fixed-head.S microwatt.c
@ -276,7 +275,6 @@ quiet_cmd_wrap = WRAP $@
image-$(CONFIG_PPC_PSERIES) += zImage.pseries
image-$(CONFIG_PPC_POWERNV) += zImage.pseries
image-$(CONFIG_PPC_IBM_CELL_BLADE) += zImage.pseries
image-$(CONFIG_PPC_PS3) += dtbImage.ps3
image-$(CONFIG_PPC_CHRP) += zImage.chrp
image-$(CONFIG_PPC_EFIKA) += zImage.chrp

@ -1,4 +1,5 @@
/dts-v1/;
#include <dt-bindings/gpio/gpio.h>
/ {
#size-cells = <0x02>;
@ -8,6 +9,7 @@
aliases {
serial0 = &UART0;
ethernet = &enet0;
};
reserved-memory {
@ -35,40 +37,79 @@
ibm,powerpc-cpu-features {
display-name = "Microwatt";
isa = <3000>;
isa = <3010>;
device_type = "cpu-features";
compatible = "ibm,powerpc-cpu-features";
mmu-radix {
isa = <3000>;
usable-privilege = <2>;
usable-privilege = <6>;
os-support = <0>;
};
little-endian {
isa = <2050>;
usable-privilege = <3>;
isa = <0>;
usable-privilege = <7>;
os-support = <0>;
hwcap-bit-nr = <1>;
};
cache-inhibited-large-page {
isa = <2040>;
usable-privilege = <2>;
isa = <0>;
usable-privilege = <6>;
os-support = <0>;
};
fixed-point-v3 {
isa = <3000>;
usable-privilege = <3>;
usable-privilege = <7>;
};
no-execute {
isa = <2010>;
isa = <0x00>;
usable-privilege = <2>;
os-support = <0>;
};
floating-point {
hfscr-bit-nr = <0>;
hwcap-bit-nr = <27>;
isa = <0>;
usable-privilege = <3>;
usable-privilege = <7>;
hv-support = <1>;
os-support = <0>;
};
prefixed-instructions {
hfscr-bit-nr = <13>;
fscr-bit-nr = <13>;
isa = <3010>;
usable-privilege = <7>;
os-support = <1>;
hv-support = <1>;
};
tar {
hfscr-bit-nr = <8>;
fscr-bit-nr = <8>;
isa = <2070>;
usable-privilege = <7>;
os-support = <1>;
hv-support = <1>;
hwcap-bit-nr = <58>;
};
control-register {
isa = <0>;
usable-privilege = <7>;
};
system-call-vectored {
isa = <3000>;
usable-privilege = <7>;
os-support = <1>;
fscr-bit-nr = <12>;
hwcap-bit-nr = <52>;
};
};
@ -101,6 +142,36 @@
ibm,mmu-lpid-bits = <12>;
ibm,mmu-pid-bits = <20>;
};
PowerPC,Microwatt@1 {
i-cache-sets = <2>;
ibm,dec-bits = <64>;
reservation-granule-size = <64>;
clock-frequency = <100000000>;
timebase-frequency = <100000000>;
i-tlb-sets = <1>;
ibm,ppc-interrupt-server#s = <1>;
i-cache-block-size = <64>;
d-cache-block-size = <64>;
d-cache-sets = <2>;
i-tlb-size = <64>;
cpu-version = <0x990000>;
status = "okay";
i-cache-size = <0x1000>;
ibm,processor-radix-AP-encodings = <0x0c 0xa0000010 0x20000015 0x4000001e>;
tlb-size = <0>;
tlb-sets = <0>;
device_type = "cpu";
d-tlb-size = <128>;
d-tlb-sets = <2>;
reg = <1>;
general-purpose;
64-bit;
d-cache-size = <0x1000>;
ibm,chip-id = <0>;
ibm,mmu-lpid-bits = <12>;
ibm,mmu-pid-bits = <20>;
};
};
soc@c0000000 {
@ -113,8 +184,8 @@
interrupt-controller@4000 {
compatible = "openpower,xics-presentation", "ibm,ppc-xicp";
ibm,interrupt-server-ranges = <0x0 0x1>;
reg = <0x4000 0x100>;
ibm,interrupt-server-ranges = <0x0 0x2>;
reg = <0x4000 0x10 0x4010 0x10>;
};
ICS: interrupt-controller@5000 {
@ -138,7 +209,18 @@
interrupts = <0x10 0x1>;
};
ethernet@8020000 {
gpio: gpio@7000 {
device_type = "gpio";
compatible = "faraday,ftgpio010";
gpio-controller;
#gpio-cells = <2>;
reg = <0x7000 0x80>;
interrupts = <0x14 1>;
interrupt-controller;
#interrupt-cells = <2>;
};
enet0: ethernet@8020000 {
compatible = "litex,liteeth";
reg = <0x8021000 0x100
0x8020800 0x100
@ -160,7 +242,6 @@
reg-names = "phy", "core", "reader", "writer", "irq";
bus-width = <4>;
interrupts = <0x13 1>;
cap-sd-highspeed;
clocks = <&sys_clk>;
};
};

@ -25,7 +25,6 @@ CONFIG_PS3_DISK=y
CONFIG_PS3_ROM=m
CONFIG_PS3_FLASH=m
CONFIG_PS3_LPM=m
CONFIG_PPC_IBM_CELL_BLADE=y
CONFIG_RTAS_FLASH=y
CONFIG_CPU_FREQ=y
CONFIG_CPU_FREQ_GOV_POWERSAVE=y
@ -133,7 +132,6 @@ CONFIG_SKGE=m
CONFIG_SKY2=m
CONFIG_GELIC_NET=m
CONFIG_GELIC_WIRELESS=y
CONFIG_SPIDER_NET=y
# CONFIG_INPUT_KEYBOARD is not set
# CONFIG_INPUT_MOUSE is not set
# CONFIG_SERIO_I8042 is not set

@ -51,7 +51,6 @@ CONFIG_PS3_DISK=m
CONFIG_PS3_ROM=m
CONFIG_PS3_FLASH=m
CONFIG_PS3_LPM=m
CONFIG_PPC_IBM_CELL_BLADE=y
CONFIG_RTAS_FLASH=m
CONFIG_CPU_FREQ_DEFAULT_GOV_ONDEMAND=y
CONFIG_CPU_FREQ_GOV_POWERSAVE=y
@ -228,7 +227,6 @@ CONFIG_NETXEN_NIC=m
CONFIG_SUNGEM=y
CONFIG_GELIC_NET=m
CONFIG_GELIC_WIRELESS=y
CONFIG_SPIDER_NET=m
CONFIG_BROADCOM_PHY=m
CONFIG_MARVELL_PHY=y
CONFIG_PPP=m

@ -51,3 +51,4 @@ $(obj)/aesp8-ppc.S $(obj)/ghashp8-ppc.S: $(obj)/%.S: $(src)/%.pl FORCE
OBJECT_FILES_NON_STANDARD_aesp10-ppc.o := y
OBJECT_FILES_NON_STANDARD_ghashp10-ppc.o := y
OBJECT_FILES_NON_STANDARD_aesp8-ppc.o := y
OBJECT_FILES_NON_STANDARD_ghashp8-ppc.o := y

@ -20,36 +20,9 @@
/* Macros for the pm_control register. */
#define CBE_PM_16BIT_CTR(ctr) (1 << (24 - ((ctr) & (NR_PHYS_CTRS - 1))))
#define CBE_PM_ENABLE_PERF_MON 0x80000000
#define CBE_PM_STOP_AT_MAX 0x40000000
#define CBE_PM_TRACE_MODE_GET(pm_control) (((pm_control) >> 28) & 0x3)
#define CBE_PM_TRACE_MODE_SET(mode) (((mode) & 0x3) << 28)
#define CBE_PM_TRACE_BUF_OVFLW(bit) (((bit) & 0x1) << 17)
#define CBE_PM_COUNT_MODE_SET(count) (((count) & 0x3) << 18)
#define CBE_PM_FREEZE_ALL_CTRS 0x00100000
#define CBE_PM_ENABLE_EXT_TRACE 0x00008000
#define CBE_PM_SPU_ADDR_TRACE_SET(msk) (((msk) & 0x3) << 9)
/* Macros for the trace_address register. */
#define CBE_PM_TRACE_BUF_FULL 0x00000800
#define CBE_PM_TRACE_BUF_EMPTY 0x00000400
#define CBE_PM_TRACE_BUF_DATA_COUNT(ta) ((ta) & 0x3ff)
#define CBE_PM_TRACE_BUF_MAX_COUNT 0x400
/* Macros for the pm07_control registers. */
#define CBE_PM_CTR_INPUT_MUX(pm07_control) (((pm07_control) >> 26) & 0x3f)
#define CBE_PM_CTR_INPUT_CONTROL 0x02000000
#define CBE_PM_CTR_POLARITY 0x01000000
#define CBE_PM_CTR_COUNT_CYCLES 0x00800000
#define CBE_PM_CTR_ENABLE 0x00400000
#define PM07_CTR_INPUT_MUX(x) (((x) & 0x3F) << 26)
#define PM07_CTR_INPUT_CONTROL(x) (((x) & 1) << 25)
#define PM07_CTR_POLARITY(x) (((x) & 1) << 24)
#define PM07_CTR_COUNT_CYCLES(x) (((x) & 1) << 23)
#define PM07_CTR_ENABLE(x) (((x) & 1) << 22)
/* Macros for the pm_status register. */
#define CBE_PM_CTR_OVERFLOW_INTR(ctr) (1 << (31 - ((ctr) & 7)))
enum pm_reg_name {
group_control,
@ -62,33 +35,4 @@ enum pm_reg_name {
pm_start_stop,
};
/* Routines for reading/writing the PMU registers. */
extern u32 cbe_read_phys_ctr(u32 cpu, u32 phys_ctr);
extern void cbe_write_phys_ctr(u32 cpu, u32 phys_ctr, u32 val);
extern u32 cbe_read_ctr(u32 cpu, u32 ctr);
extern void cbe_write_ctr(u32 cpu, u32 ctr, u32 val);
extern u32 cbe_read_pm07_control(u32 cpu, u32 ctr);
extern void cbe_write_pm07_control(u32 cpu, u32 ctr, u32 val);
extern u32 cbe_read_pm(u32 cpu, enum pm_reg_name reg);
extern void cbe_write_pm(u32 cpu, enum pm_reg_name reg, u32 val);
extern u32 cbe_get_ctr_size(u32 cpu, u32 phys_ctr);
extern void cbe_set_ctr_size(u32 cpu, u32 phys_ctr, u32 ctr_size);
extern void cbe_enable_pm(u32 cpu);
extern void cbe_disable_pm(u32 cpu);
extern void cbe_read_trace_buffer(u32 cpu, u64 *buf);
extern void cbe_enable_pm_interrupts(u32 cpu, u32 thread, u32 mask);
extern void cbe_disable_pm_interrupts(u32 cpu);
extern u32 cbe_get_and_clear_pm_interrupts(u32 cpu);
extern void cbe_sync_irq(int node);
#define CBE_COUNT_SUPERVISOR_MODE 0
#define CBE_COUNT_HYPERVISOR_MODE 1
#define CBE_COUNT_PROBLEM_MODE 2
#define CBE_COUNT_ALL_MODES 3
#endif /* __ASM_CELL_PMU_H__ */

@ -18,293 +18,6 @@
#include <asm/cell-pmu.h>
/*
*
* Some HID register definitions
*
*/
/* CBE specific HID0 bits */
#define HID0_CBE_THERM_WAKEUP 0x0000020000000000ul
#define HID0_CBE_SYSERR_WAKEUP 0x0000008000000000ul
#define HID0_CBE_THERM_INT_EN 0x0000000400000000ul
#define HID0_CBE_SYSERR_INT_EN 0x0000000200000000ul
#define MAX_CBE 2
/*
*
* Pervasive unit register definitions
*
*/
union spe_reg {
u64 val;
u8 spe[8];
};
union ppe_spe_reg {
u64 val;
struct {
u32 ppe;
u32 spe;
};
};
struct cbe_pmd_regs {
/* Debug Bus Control */
u64 pad_0x0000; /* 0x0000 */
u64 group_control; /* 0x0008 */
u8 pad_0x0010_0x00a8 [0x00a8 - 0x0010]; /* 0x0010 */
u64 debug_bus_control; /* 0x00a8 */
u8 pad_0x00b0_0x0100 [0x0100 - 0x00b0]; /* 0x00b0 */
u64 trace_aux_data; /* 0x0100 */
u64 trace_buffer_0_63; /* 0x0108 */
u64 trace_buffer_64_127; /* 0x0110 */
u64 trace_address; /* 0x0118 */
u64 ext_tr_timer; /* 0x0120 */
u8 pad_0x0128_0x0400 [0x0400 - 0x0128]; /* 0x0128 */
/* Performance Monitor */
u64 pm_status; /* 0x0400 */
u64 pm_control; /* 0x0408 */
u64 pm_interval; /* 0x0410 */
u64 pm_ctr[4]; /* 0x0418 */
u64 pm_start_stop; /* 0x0438 */
u64 pm07_control[8]; /* 0x0440 */
u8 pad_0x0480_0x0800 [0x0800 - 0x0480]; /* 0x0480 */
/* Thermal Sensor Registers */
union spe_reg ts_ctsr1; /* 0x0800 */
u64 ts_ctsr2; /* 0x0808 */
union spe_reg ts_mtsr1; /* 0x0810 */
u64 ts_mtsr2; /* 0x0818 */
union spe_reg ts_itr1; /* 0x0820 */
u64 ts_itr2; /* 0x0828 */
u64 ts_gitr; /* 0x0830 */
u64 ts_isr; /* 0x0838 */
u64 ts_imr; /* 0x0840 */
union spe_reg tm_cr1; /* 0x0848 */
u64 tm_cr2; /* 0x0850 */
u64 tm_simr; /* 0x0858 */
union ppe_spe_reg tm_tpr; /* 0x0860 */
union spe_reg tm_str1; /* 0x0868 */
u64 tm_str2; /* 0x0870 */
union ppe_spe_reg tm_tsr; /* 0x0878 */
/* Power Management */
u64 pmcr; /* 0x0880 */
#define CBE_PMD_PAUSE_ZERO_CONTROL 0x10000
u64 pmsr; /* 0x0888 */
/* Time Base Register */
u64 tbr; /* 0x0890 */
u8 pad_0x0898_0x0c00 [0x0c00 - 0x0898]; /* 0x0898 */
/* Fault Isolation Registers */
u64 checkstop_fir; /* 0x0c00 */
u64 recoverable_fir; /* 0x0c08 */
u64 spec_att_mchk_fir; /* 0x0c10 */
u32 fir_mode_reg; /* 0x0c18 */
u8 pad_0x0c1c_0x0c20 [4]; /* 0x0c1c */
#define CBE_PMD_FIR_MODE_M8 0x00800
u64 fir_enable_mask; /* 0x0c20 */
u8 pad_0x0c28_0x0ca8 [0x0ca8 - 0x0c28]; /* 0x0c28 */
u64 ras_esc_0; /* 0x0ca8 */
u8 pad_0x0cb0_0x1000 [0x1000 - 0x0cb0]; /* 0x0cb0 */
};
extern struct cbe_pmd_regs __iomem *cbe_get_pmd_regs(struct device_node *np);
extern struct cbe_pmd_regs __iomem *cbe_get_cpu_pmd_regs(int cpu);
/*
* PMU shadow registers
*
* Many of the registers in the performance monitoring unit are write-only,
* so we need to save a copy of what we write to those registers.
*
* The actual data counters are read/write. However, writing to the counters
* only takes effect if the PMU is enabled. Otherwise the value is stored in
* a hardware latch until the next time the PMU is enabled. So we save a copy
* of the counter values if we need to read them back while the PMU is
* disabled. The counter_value_in_latch field is a bitmap indicating which
* counters currently have a value waiting to be written.
*/
struct cbe_pmd_shadow_regs {
u32 group_control;
u32 debug_bus_control;
u32 trace_address;
u32 ext_tr_timer;
u32 pm_status;
u32 pm_control;
u32 pm_interval;
u32 pm_start_stop;
u32 pm07_control[NR_CTRS];
u32 pm_ctr[NR_PHYS_CTRS];
u32 counter_value_in_latch;
};
extern struct cbe_pmd_shadow_regs *cbe_get_pmd_shadow_regs(struct device_node *np);
extern struct cbe_pmd_shadow_regs *cbe_get_cpu_pmd_shadow_regs(int cpu);
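/*
 * The shadow pattern above means every write to a write-only PMU register
 * is mirrored into the shadow copy so it can be read back later. A sketch
 * of how the (removed) interfaces fit together; the wrapper name is
 * hypothetical:
 */
static void cbe_write_pm07_shadowed(u32 cpu, u32 ctr, u32 val)
{
	struct cbe_pmd_shadow_regs *shadow = cbe_get_cpu_pmd_shadow_regs(cpu);

	shadow->pm07_control[ctr] = val;	/* keep a readable copy */
	cbe_write_pm07_control(cpu, ctr, val);	/* hardware reg is write-only */
}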
/*
*
* IIC unit register definitions
*
*/
struct cbe_iic_pending_bits {
u32 data;
u8 flags;
u8 class;
u8 source;
u8 prio;
};
#define CBE_IIC_IRQ_VALID 0x80
#define CBE_IIC_IRQ_IPI 0x40
struct cbe_iic_thread_regs {
struct cbe_iic_pending_bits pending;
struct cbe_iic_pending_bits pending_destr;
u64 generate;
u64 prio;
};
struct cbe_iic_regs {
u8 pad_0x0000_0x0400[0x0400 - 0x0000]; /* 0x0000 */
/* IIC interrupt registers */
struct cbe_iic_thread_regs thread[2]; /* 0x0400 */
u64 iic_ir; /* 0x0440 */
#define CBE_IIC_IR_PRIO(x) (((x) & 0xf) << 12)
#define CBE_IIC_IR_DEST_NODE(x) (((x) & 0xf) << 4)
#define CBE_IIC_IR_DEST_UNIT(x) ((x) & 0xf)
#define CBE_IIC_IR_IOC_0 0x0
#define CBE_IIC_IR_IOC_1S 0xb
#define CBE_IIC_IR_PT_0 0xe
#define CBE_IIC_IR_PT_1 0xf
u64 iic_is; /* 0x0448 */
#define CBE_IIC_IS_PMI 0x2
u8 pad_0x0450_0x0500[0x0500 - 0x0450]; /* 0x0450 */
/* IOC FIR */
u64 ioc_fir_reset; /* 0x0500 */
u64 ioc_fir_set; /* 0x0508 */
u64 ioc_checkstop_enable; /* 0x0510 */
u64 ioc_fir_error_mask; /* 0x0518 */
u64 ioc_syserr_enable; /* 0x0520 */
u64 ioc_fir; /* 0x0528 */
u8 pad_0x0530_0x1000[0x1000 - 0x0530]; /* 0x0530 */
};
extern struct cbe_iic_regs __iomem *cbe_get_iic_regs(struct device_node *np);
extern struct cbe_iic_regs __iomem *cbe_get_cpu_iic_regs(int cpu);
struct cbe_mic_tm_regs {
u8 pad_0x0000_0x0040[0x0040 - 0x0000]; /* 0x0000 */
u64 mic_ctl_cnfg2; /* 0x0040 */
#define CBE_MIC_ENABLE_AUX_TRC 0x8000000000000000LL
#define CBE_MIC_DISABLE_PWR_SAV_2 0x0200000000000000LL
#define CBE_MIC_DISABLE_AUX_TRC_WRAP 0x0100000000000000LL
#define CBE_MIC_ENABLE_AUX_TRC_INT 0x0080000000000000LL
u64 pad_0x0048; /* 0x0048 */
u64 mic_aux_trc_base; /* 0x0050 */
u64 mic_aux_trc_max_addr; /* 0x0058 */
u64 mic_aux_trc_cur_addr; /* 0x0060 */
u64 mic_aux_trc_grf_addr; /* 0x0068 */
u64 mic_aux_trc_grf_data; /* 0x0070 */
u64 pad_0x0078; /* 0x0078 */
u64 mic_ctl_cnfg_0; /* 0x0080 */
#define CBE_MIC_DISABLE_PWR_SAV_0 0x8000000000000000LL
u64 pad_0x0088; /* 0x0088 */
u64 slow_fast_timer_0; /* 0x0090 */
u64 slow_next_timer_0; /* 0x0098 */
u8 pad_0x00a0_0x00f8[0x00f8 - 0x00a0]; /* 0x00a0 */
u64 mic_df_ecc_address_0; /* 0x00f8 */
u8 pad_0x0100_0x01b8[0x01b8 - 0x0100]; /* 0x0100 */
u64 mic_df_ecc_address_1; /* 0x01b8 */
u64 mic_ctl_cnfg_1; /* 0x01c0 */
#define CBE_MIC_DISABLE_PWR_SAV_1 0x8000000000000000LL
u64 pad_0x01c8; /* 0x01c8 */
u64 slow_fast_timer_1; /* 0x01d0 */
u64 slow_next_timer_1; /* 0x01d8 */
u8 pad_0x01e0_0x0208[0x0208 - 0x01e0]; /* 0x01e0 */
u64 mic_exc; /* 0x0208 */
#define CBE_MIC_EXC_BLOCK_SCRUB 0x0800000000000000ULL
#define CBE_MIC_EXC_FAST_SCRUB 0x0100000000000000ULL
u64 mic_mnt_cfg; /* 0x0210 */
#define CBE_MIC_MNT_CFG_CHAN_0_POP 0x0002000000000000ULL
#define CBE_MIC_MNT_CFG_CHAN_1_POP 0x0004000000000000ULL
u64 mic_df_config; /* 0x0218 */
#define CBE_MIC_ECC_DISABLE_0 0x4000000000000000ULL
#define CBE_MIC_ECC_REP_SINGLE_0 0x2000000000000000ULL
#define CBE_MIC_ECC_DISABLE_1 0x0080000000000000ULL
#define CBE_MIC_ECC_REP_SINGLE_1 0x0040000000000000ULL
u8 pad_0x0220_0x0230[0x0230 - 0x0220]; /* 0x0220 */
u64 mic_fir; /* 0x0230 */
#define CBE_MIC_FIR_ECC_SINGLE_0_ERR 0x0200000000000000ULL
#define CBE_MIC_FIR_ECC_MULTI_0_ERR 0x0100000000000000ULL
#define CBE_MIC_FIR_ECC_SINGLE_1_ERR 0x0080000000000000ULL
#define CBE_MIC_FIR_ECC_MULTI_1_ERR 0x0040000000000000ULL
#define CBE_MIC_FIR_ECC_ERR_MASK 0xffff000000000000ULL
#define CBE_MIC_FIR_ECC_SINGLE_0_CTE 0x0000020000000000ULL
#define CBE_MIC_FIR_ECC_MULTI_0_CTE 0x0000010000000000ULL
#define CBE_MIC_FIR_ECC_SINGLE_1_CTE 0x0000008000000000ULL
#define CBE_MIC_FIR_ECC_MULTI_1_CTE 0x0000004000000000ULL
#define CBE_MIC_FIR_ECC_CTE_MASK 0x0000ffff00000000ULL
#define CBE_MIC_FIR_ECC_SINGLE_0_RESET 0x0000000002000000ULL
#define CBE_MIC_FIR_ECC_MULTI_0_RESET 0x0000000001000000ULL
#define CBE_MIC_FIR_ECC_SINGLE_1_RESET 0x0000000000800000ULL
#define CBE_MIC_FIR_ECC_MULTI_1_RESET 0x0000000000400000ULL
#define CBE_MIC_FIR_ECC_RESET_MASK 0x00000000ffff0000ULL
#define CBE_MIC_FIR_ECC_SINGLE_0_SET 0x0000000000000200ULL
#define CBE_MIC_FIR_ECC_MULTI_0_SET 0x0000000000000100ULL
#define CBE_MIC_FIR_ECC_SINGLE_1_SET 0x0000000000000080ULL
#define CBE_MIC_FIR_ECC_MULTI_1_SET 0x0000000000000040ULL
#define CBE_MIC_FIR_ECC_SET_MASK 0x000000000000ffffULL
u64 mic_fir_debug; /* 0x0238 */
u8 pad_0x0240_0x1000[0x1000 - 0x0240]; /* 0x0240 */
};
extern struct cbe_mic_tm_regs __iomem *cbe_get_mic_tm_regs(struct device_node *np);
extern struct cbe_mic_tm_regs __iomem *cbe_get_cpu_mic_tm_regs(int cpu);
/* Cell page table entries */
#define CBE_IOPTE_PP_W 0x8000000000000000ul /* protection: write */
#define CBE_IOPTE_PP_R 0x4000000000000000ul /* protection: read */
@ -315,13 +28,4 @@ extern struct cbe_mic_tm_regs __iomem *cbe_get_cpu_mic_tm_regs(int cpu);
#define CBE_IOPTE_H 0x0000000000000800ul /* cache hint */
#define CBE_IOPTE_IOID_Mask 0x00000000000007fful /* ioid */
/* some utility functions to deal with SMT */
extern u32 cbe_get_hw_thread_id(int cpu);
extern u32 cbe_cpu_to_node(int cpu);
extern u32 cbe_node_to_cpu(int node);
/* Init this module early */
extern void cbe_regs_init(void);
#endif /* CBE_REGS_H */

@ -1,36 +0,0 @@
/* SPDX-License-Identifier: GPL-2.0-or-later */
/*
* (c) Copyright 2006 Benjamin Herrenschmidt, IBM Corp.
* <benh@kernel.crashing.org>
*/
#ifndef _ASM_POWERPC_DCR_GENERIC_H
#define _ASM_POWERPC_DCR_GENERIC_H
#ifdef __KERNEL__
#ifndef __ASSEMBLY__
enum host_type_t {DCR_HOST_MMIO, DCR_HOST_NATIVE, DCR_HOST_INVALID};
typedef struct {
enum host_type_t type;
union {
dcr_host_mmio_t mmio;
dcr_host_native_t native;
} host;
} dcr_host_t;
extern bool dcr_map_ok_generic(dcr_host_t host);
extern dcr_host_t dcr_map_generic(struct device_node *dev, unsigned int dcr_n,
unsigned int dcr_c);
extern void dcr_unmap_generic(dcr_host_t host, unsigned int dcr_c);
extern u32 dcr_read_generic(dcr_host_t host, unsigned int dcr_n);
extern void dcr_write_generic(dcr_host_t host, unsigned int dcr_n, u32 value);
#endif /* __ASSEMBLY__ */
#endif /* __KERNEL__ */
#endif /* _ASM_POWERPC_DCR_GENERIC_H */

@ -1,44 +0,0 @@
/* SPDX-License-Identifier: GPL-2.0-or-later */
/*
* (c) Copyright 2006 Benjamin Herrenschmidt, IBM Corp.
* <benh@kernel.crashing.org>
*/
#ifndef _ASM_POWERPC_DCR_MMIO_H
#define _ASM_POWERPC_DCR_MMIO_H
#ifdef __KERNEL__
#include <asm/io.h>
typedef struct {
void __iomem *token;
unsigned int stride;
unsigned int base;
} dcr_host_mmio_t;
static inline bool dcr_map_ok_mmio(dcr_host_mmio_t host)
{
return host.token != NULL;
}
extern dcr_host_mmio_t dcr_map_mmio(struct device_node *dev,
unsigned int dcr_n,
unsigned int dcr_c);
extern void dcr_unmap_mmio(dcr_host_mmio_t host, unsigned int dcr_c);
static inline u32 dcr_read_mmio(dcr_host_mmio_t host, unsigned int dcr_n)
{
return in_be32(host.token + ((host.base + dcr_n) * host.stride));
}
static inline void dcr_write_mmio(dcr_host_mmio_t host,
unsigned int dcr_n,
u32 value)
{
out_be32(host.token + ((host.base + dcr_n) * host.stride), value);
}
#endif /* __KERNEL__ */
#endif /* _ASM_POWERPC_DCR_MMIO_H */

@ -10,46 +10,14 @@
#ifndef __ASSEMBLY__
#ifdef CONFIG_PPC_DCR
#ifdef CONFIG_PPC_DCR_NATIVE
#include <asm/dcr-native.h>
#endif
#ifdef CONFIG_PPC_DCR_MMIO
#include <asm/dcr-mmio.h>
#endif
/* Indirection layer for providing both NATIVE and MMIO support. */
#if defined(CONFIG_PPC_DCR_NATIVE) && defined(CONFIG_PPC_DCR_MMIO)
#include <asm/dcr-generic.h>
#define DCR_MAP_OK(host) dcr_map_ok_generic(host)
#define dcr_map(dev, dcr_n, dcr_c) dcr_map_generic(dev, dcr_n, dcr_c)
#define dcr_unmap(host, dcr_c) dcr_unmap_generic(host, dcr_c)
#define dcr_read(host, dcr_n) dcr_read_generic(host, dcr_n)
#define dcr_write(host, dcr_n, value) dcr_write_generic(host, dcr_n, value)
#else
#ifdef CONFIG_PPC_DCR_NATIVE
typedef dcr_host_native_t dcr_host_t;
#define DCR_MAP_OK(host) dcr_map_ok_native(host)
#define dcr_map(dev, dcr_n, dcr_c) dcr_map_native(dev, dcr_n, dcr_c)
#define dcr_unmap(host, dcr_c) dcr_unmap_native(host, dcr_c)
#define dcr_read(host, dcr_n) dcr_read_native(host, dcr_n)
#define dcr_write(host, dcr_n, value) dcr_write_native(host, dcr_n, value)
#else
typedef dcr_host_mmio_t dcr_host_t;
#define DCR_MAP_OK(host) dcr_map_ok_mmio(host)
#define dcr_map(dev, dcr_n, dcr_c) dcr_map_mmio(dev, dcr_n, dcr_c)
#define dcr_unmap(host, dcr_c) dcr_unmap_mmio(host, dcr_c)
#define dcr_read(host, dcr_n) dcr_read_mmio(host, dcr_n)
#define dcr_write(host, dcr_n, value) dcr_write_mmio(host, dcr_n, value)
#endif
#endif /* defined(CONFIG_PPC_DCR_NATIVE) && defined(CONFIG_PPC_DCR_MMIO) */
/*
* additional helpers to read the DCR base from the device-tree
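/*
 * With the MMIO flavour removed, dcr_map()/dcr_read() above always resolve
 * to the native implementations and typical driver usage is unchanged. A
 * minimal sketch using the existing dcr.h helpers (the enable bit is a
 * hypothetical example):
 */
static int probe_dcr_block(struct device_node *np)
{
	unsigned int base = dcr_resource_start(np, 0);
	unsigned int len = dcr_resource_len(np, 0);
	dcr_host_t host = dcr_map(np, base, len);
	u32 val;

	if (!DCR_MAP_OK(host))
		return -ENODEV;

	val = dcr_read(host, 0);	/* first DCR of the mapped block */
	dcr_write(host, 0, val | 0x1);	/* hypothetical enable bit */
	dcr_unmap(host, len);
	return 0;
}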

@ -348,6 +348,7 @@
#define H_SCM_FLUSH 0x44C
#define H_GET_ENERGY_SCALE_INFO 0x450
#define H_PKS_SIGNED_UPDATE 0x454
#define H_HTM 0x458
#define H_WATCHDOG 0x45C
#define H_GUEST_GET_CAPABILITIES 0x460
#define H_GUEST_SET_CAPABILITIES 0x464
@ -498,6 +499,39 @@
#define H_GUEST_CAP_POWER11 (1UL<<(63-3))
#define H_GUEST_CAP_BITMAP2 (1UL<<(63-63))
/*
* Defines for H_HTM - Macros for hardware trace macro (HTM) function.
*/
#define H_HTM_FLAGS_HARDWARE_TARGET (1ul << 63)
#define H_HTM_FLAGS_LOGICAL_TARGET (1ul << 62)
#define H_HTM_FLAGS_PROCID_TARGET (1ul << 61)
#define H_HTM_FLAGS_NOWRAP (1ul << 60)
#define H_HTM_OP_SHIFT (63-15)
#define H_HTM_OP(x) ((unsigned long)(x)<<H_HTM_OP_SHIFT)
#define H_HTM_OP_CAPABILITIES 0x01
#define H_HTM_OP_STATUS 0x02
#define H_HTM_OP_SETUP 0x03
#define H_HTM_OP_CONFIGURE 0x04
#define H_HTM_OP_START 0x05
#define H_HTM_OP_STOP 0x06
#define H_HTM_OP_DECONFIGURE 0x07
#define H_HTM_OP_DUMP_DETAILS 0x08
#define H_HTM_OP_DUMP_DATA 0x09
#define H_HTM_OP_DUMP_SYSMEM_CONF 0x0a
#define H_HTM_OP_DUMP_SYSPROC_CONF 0x0b
#define H_HTM_TYPE_SHIFT (63-31)
#define H_HTM_TYPE(x) ((unsigned long)(x)<<H_HTM_TYPE_SHIFT)
#define H_HTM_TYPE_NEST 0x01
#define H_HTM_TYPE_CORE 0x02
#define H_HTM_TYPE_LLAT 0x03
#define H_HTM_TYPE_GLOBAL 0xff
#define H_HTM_TARGET_NODE_INDEX(x) ((unsigned long)(x)<<(63-15))
#define H_HTM_TARGET_NODAL_CHIP_INDEX(x) ((unsigned long)(x)<<(63-31))
#define H_HTM_TARGET_CORE_INDEX_ON_CHIP(x) ((unsigned long)(x)<<(63-47))
#ifndef __ASSEMBLY__
#include <linux/types.h>

@ -1,61 +1,15 @@
/* SPDX-License-Identifier: GPL-2.0 */
/* This file is meant to be included multiple times by other headers */
/* last 2 arguments are used by platforms/cell/io-workarounds.[ch] */
DEF_PCI_AC_RET(readb, u8, (const PCI_IO_ADDR addr), (addr), mem, addr)
DEF_PCI_AC_RET(readw, u16, (const PCI_IO_ADDR addr), (addr), mem, addr)
DEF_PCI_AC_RET(readl, u32, (const PCI_IO_ADDR addr), (addr), mem, addr)
DEF_PCI_AC_RET(readw_be, u16, (const PCI_IO_ADDR addr), (addr), mem, addr)
DEF_PCI_AC_RET(readl_be, u32, (const PCI_IO_ADDR addr), (addr), mem, addr)
DEF_PCI_AC_NORET(writeb, (u8 val, PCI_IO_ADDR addr), (val, addr), mem, addr)
DEF_PCI_AC_NORET(writew, (u16 val, PCI_IO_ADDR addr), (val, addr), mem, addr)
DEF_PCI_AC_NORET(writel, (u32 val, PCI_IO_ADDR addr), (val, addr), mem, addr)
DEF_PCI_AC_NORET(writew_be, (u16 val, PCI_IO_ADDR addr), (val, addr), mem, addr)
DEF_PCI_AC_NORET(writel_be, (u32 val, PCI_IO_ADDR addr), (val, addr), mem, addr)
#ifdef __powerpc64__
DEF_PCI_AC_RET(readq, u64, (const PCI_IO_ADDR addr), (addr), mem, addr)
DEF_PCI_AC_RET(readq_be, u64, (const PCI_IO_ADDR addr), (addr), mem, addr)
DEF_PCI_AC_NORET(writeq, (u64 val, PCI_IO_ADDR addr), (val, addr), mem, addr)
DEF_PCI_AC_NORET(writeq_be, (u64 val, PCI_IO_ADDR addr), (val, addr), mem, addr)
#endif /* __powerpc64__ */
DEF_PCI_AC_RET(inb, u8, (unsigned long port), (port), pio, port)
DEF_PCI_AC_RET(inw, u16, (unsigned long port), (port), pio, port)
DEF_PCI_AC_RET(inl, u32, (unsigned long port), (port), pio, port)
DEF_PCI_AC_NORET(outb, (u8 val, unsigned long port), (val, port), pio, port)
DEF_PCI_AC_NORET(outw, (u16 val, unsigned long port), (val, port), pio, port)
DEF_PCI_AC_NORET(outl, (u32 val, unsigned long port), (val, port), pio, port)
DEF_PCI_AC_NORET(readsb, (const PCI_IO_ADDR a, void *b, unsigned long c),
(a, b, c), mem, a)
DEF_PCI_AC_NORET(readsw, (const PCI_IO_ADDR a, void *b, unsigned long c),
(a, b, c), mem, a)
DEF_PCI_AC_NORET(readsl, (const PCI_IO_ADDR a, void *b, unsigned long c),
(a, b, c), mem, a)
DEF_PCI_AC_NORET(writesb, (PCI_IO_ADDR a, const void *b, unsigned long c),
(a, b, c), mem, a)
DEF_PCI_AC_NORET(writesw, (PCI_IO_ADDR a, const void *b, unsigned long c),
(a, b, c), mem, a)
DEF_PCI_AC_NORET(writesl, (PCI_IO_ADDR a, const void *b, unsigned long c),
(a, b, c), mem, a)
DEF_PCI_AC_NORET(insb, (unsigned long p, void *b, unsigned long c),
(p, b, c), pio, p)
DEF_PCI_AC_NORET(insw, (unsigned long p, void *b, unsigned long c),
(p, b, c), pio, p)
DEF_PCI_AC_NORET(insl, (unsigned long p, void *b, unsigned long c),
(p, b, c), pio, p)
DEF_PCI_AC_NORET(outsb, (unsigned long p, const void *b, unsigned long c),
(p, b, c), pio, p)
DEF_PCI_AC_NORET(outsw, (unsigned long p, const void *b, unsigned long c),
(p, b, c), pio, p)
DEF_PCI_AC_NORET(outsl, (unsigned long p, const void *b, unsigned long c),
(p, b, c), pio, p)
DEF_PCI_AC_NORET(memset_io, (PCI_IO_ADDR a, int c, unsigned long n),
(a, c, n), mem, a)
DEF_PCI_AC_NORET(memcpy_fromio, (void *d, const PCI_IO_ADDR s, unsigned long n),
(d, s, n), mem, s)
DEF_PCI_AC_NORET(memcpy_toio, (PCI_IO_ADDR d, const void *s, unsigned long n),
(d, s, n), mem, d)
DEF_PCI_AC_RET(inb, u8, (unsigned long port), (port))
DEF_PCI_AC_RET(inw, u16, (unsigned long port), (port))
DEF_PCI_AC_RET(inl, u32, (unsigned long port), (port))
DEF_PCI_AC_NORET(outb, (u8 val, unsigned long port), (val, port))
DEF_PCI_AC_NORET(outw, (u16 val, unsigned long port), (val, port))
DEF_PCI_AC_NORET(outl, (u32 val, unsigned long port), (val, port))
DEF_PCI_AC_NORET(insb, (unsigned long p, void *b, unsigned long c), (p, b, c))
DEF_PCI_AC_NORET(insw, (unsigned long p, void *b, unsigned long c), (p, b, c))
DEF_PCI_AC_NORET(insl, (unsigned long p, void *b, unsigned long c), (p, b, c))
DEF_PCI_AC_NORET(outsb, (unsigned long p, const void *b, unsigned long c), (p, b, c))
DEF_PCI_AC_NORET(outsw, (unsigned long p, const void *b, unsigned long c), (p, b, c))
DEF_PCI_AC_NORET(outsl, (unsigned long p, const void *b, unsigned long c), (p, b, c))

@ -1,55 +0,0 @@
/* SPDX-License-Identifier: GPL-2.0-or-later */
/*
* Support PCI IO workaround
*
* (C) Copyright 2007-2008 TOSHIBA CORPORATION
*/
#ifndef _IO_WORKAROUNDS_H
#define _IO_WORKAROUNDS_H
#ifdef CONFIG_PPC_IO_WORKAROUNDS
#include <linux/io.h>
#include <asm/pci-bridge.h>
/* Bus info */
struct iowa_bus {
struct pci_controller *phb;
struct ppc_pci_io *ops;
void *private;
};
void iowa_register_bus(struct pci_controller *, struct ppc_pci_io *,
int (*)(struct iowa_bus *, void *), void *);
struct iowa_bus *iowa_mem_find_bus(const PCI_IO_ADDR);
struct iowa_bus *iowa_pio_find_bus(unsigned long);
extern struct ppc_pci_io spiderpci_ops;
extern int spiderpci_iowa_init(struct iowa_bus *, void *);
#define SPIDER_PCI_REG_BASE 0xd000
#define SPIDER_PCI_REG_SIZE 0x1000
#define SPIDER_PCI_VCI_CNTL_STAT 0x0110
#define SPIDER_PCI_DUMMY_READ 0x0810
#define SPIDER_PCI_DUMMY_READ_BASE 0x0814
#endif
#if defined(CONFIG_PPC_IO_WORKAROUNDS) && defined(CONFIG_PPC_INDIRECT_MMIO)
extern bool io_workaround_inited;
static inline bool iowa_is_active(void)
{
return unlikely(io_workaround_inited);
}
#else
static inline bool iowa_is_active(void)
{
return false;
}
#endif
void __iomem *iowa_ioremap(phys_addr_t addr, unsigned long size,
pgprot_t prot, void *caller);
#endif /* _IO_WORKAROUNDS_H */

@ -65,8 +65,8 @@ extern resource_size_t isa_mem_base;
extern bool isa_io_special;
#ifdef CONFIG_PPC32
-#if defined(CONFIG_PPC_INDIRECT_PIO) || defined(CONFIG_PPC_INDIRECT_MMIO)
-#error CONFIG_PPC_INDIRECT_{PIO,MMIO} are not yet supported on 32 bits
+#ifdef CONFIG_PPC_INDIRECT_PIO
+#error CONFIG_PPC_INDIRECT_PIO is not yet supported on 32 bits
#endif
#endif
@ -80,16 +80,12 @@ extern bool isa_io_special;
*
* in_8, in_le16, in_be16, in_le32, in_be32, in_le64, in_be64
* out_8, out_le16, out_be16, out_le32, out_be32, out_le64, out_be64
- * _insb, _insw_ns, _insl_ns, _outsb, _outsw_ns, _outsl_ns
+ * _insb, _insw, _insl, _outsb, _outsw, _outsl
*
* Those operate directly on a kernel virtual address. Note that the prototype
* for the out_* accessors has the arguments in opposite order from the usual
* linux PCI accessors. Unlike those, they take the address first and the value
* next.
*
- * Note: I might drop the _ns suffix on the stream operations soon as it is
- * simply normal for stream operations to not swap in the first place.
- *
*/
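/*
 * The argument-order difference is easy to trip over; concretely, with
 * regs being some ioremap()ed mapping and made-up offsets:
 *
 *	out_le32(regs + 0x10, 0x1);	// native: address first, value next
 *	val = in_le32(regs + 0x14);
 *
 *	writel(0x1, regs + 0x10);	// PCI-style: value first, address next
 *	val = readl(regs + 0x14);
 */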
/* -mprefixed can generate offsets beyond range, fall back hack */
@ -228,19 +224,10 @@ static inline void out_be64(volatile u64 __iomem *addr, u64 val)
*/
extern void _insb(const volatile u8 __iomem *addr, void *buf, long count);
extern void _outsb(volatile u8 __iomem *addr,const void *buf,long count);
-extern void _insw_ns(const volatile u16 __iomem *addr, void *buf, long count);
-extern void _outsw_ns(volatile u16 __iomem *addr, const void *buf, long count);
-extern void _insl_ns(const volatile u32 __iomem *addr, void *buf, long count);
-extern void _outsl_ns(volatile u32 __iomem *addr, const void *buf, long count);
-/* The _ns naming is historical and will be removed. For now, just #define
- * the non _ns equivalent names
- */
-#define _insw _insw_ns
-#define _insl _insl_ns
-#define _outsw _outsw_ns
-#define _outsl _outsl_ns
+extern void _insw(const volatile u16 __iomem *addr, void *buf, long count);
+extern void _outsw(volatile u16 __iomem *addr, const void *buf, long count);
+extern void _insl(const volatile u32 __iomem *addr, void *buf, long count);
+extern void _outsl(volatile u32 __iomem *addr, const void *buf, long count);
/*
* memset_io, memcpy_toio, memcpy_fromio base implementations are out of line
@ -261,9 +248,9 @@ extern void _memcpy_toio(volatile void __iomem *dest, const void *src,
* for PowerPC is as close as possible to the x86 version of these, and thus
* provides fairly heavy weight barriers for the non-raw versions
*
* In addition, they support a hook mechanism when CONFIG_PPC_INDIRECT_MMIO
* or CONFIG_PPC_INDIRECT_PIO are set allowing the platform to provide its
* own implementation of some or all of the accessors.
* In addition, they support a hook mechanism when CONFIG_PPC_INDIRECT_PIO
* is set allowing the platform to provide its own implementation of some
* of the accessors.
*/
/*
@ -274,116 +261,11 @@ extern void _memcpy_toio(volatile void __iomem *dest, const void *src,
#include <asm/eeh.h>
#endif
/* Shortcut to the MMIO argument pointer */
#define PCI_IO_ADDR volatile void __iomem *
/* Indirect IO address tokens:
*
* When CONFIG_PPC_INDIRECT_MMIO is set, the platform can provide hooks
* on all MMIOs. (Note that this is all 64 bits only for now)
*
* To help platforms who may need to differentiate MMIO addresses in
* their hooks, a bitfield is reserved for use by the platform near the
* top of MMIO addresses (not PIO, those have to cope the hard way).
*
* The highest address in the kernel virtual space are:
*
* d0003fffffffffff # with Hash MMU
* c00fffffffffffff # with Radix MMU
*
* The top 4 bits are reserved as the region ID on hash, leaving us 8 bits
* that can be used for the field.
*
* The direct IO mapping operations will then mask off those bits
* before doing the actual access, though that only happen when
* CONFIG_PPC_INDIRECT_MMIO is set, thus be careful when you use that
* mechanism
*
* For PIO, there is a separate CONFIG_PPC_INDIRECT_PIO which makes
* all PIO functions call through a hook.
*/
#ifdef CONFIG_PPC_INDIRECT_MMIO
#define PCI_IO_IND_TOKEN_SHIFT 52
#define PCI_IO_IND_TOKEN_MASK (0xfful << PCI_IO_IND_TOKEN_SHIFT)
#define PCI_FIX_ADDR(addr) \
((PCI_IO_ADDR)(((unsigned long)(addr)) & ~PCI_IO_IND_TOKEN_MASK))
#define PCI_GET_ADDR_TOKEN(addr) \
(((unsigned long)(addr) & PCI_IO_IND_TOKEN_MASK) >> \
PCI_IO_IND_TOKEN_SHIFT)
#define PCI_SET_ADDR_TOKEN(addr, token) \
do { \
unsigned long __a = (unsigned long)(addr); \
__a &= ~PCI_IO_IND_TOKEN_MASK; \
__a |= ((unsigned long)(token)) << PCI_IO_IND_TOKEN_SHIFT; \
(addr) = (void __iomem *)__a; \
} while(0)
#else
#define PCI_FIX_ADDR(addr) (addr)
#endif
/*
* Non ordered and non-swapping "raw" accessors
*/
static inline unsigned char __raw_readb(const volatile void __iomem *addr)
{
return *(volatile unsigned char __force *)PCI_FIX_ADDR(addr);
}
#define __raw_readb __raw_readb
static inline unsigned short __raw_readw(const volatile void __iomem *addr)
{
return *(volatile unsigned short __force *)PCI_FIX_ADDR(addr);
}
#define __raw_readw __raw_readw
static inline unsigned int __raw_readl(const volatile void __iomem *addr)
{
return *(volatile unsigned int __force *)PCI_FIX_ADDR(addr);
}
#define __raw_readl __raw_readl
static inline void __raw_writeb(unsigned char v, volatile void __iomem *addr)
{
*(volatile unsigned char __force *)PCI_FIX_ADDR(addr) = v;
}
#define __raw_writeb __raw_writeb
static inline void __raw_writew(unsigned short v, volatile void __iomem *addr)
{
*(volatile unsigned short __force *)PCI_FIX_ADDR(addr) = v;
}
#define __raw_writew __raw_writew
static inline void __raw_writel(unsigned int v, volatile void __iomem *addr)
{
*(volatile unsigned int __force *)PCI_FIX_ADDR(addr) = v;
}
#define __raw_writel __raw_writel
#define _IO_PORT(port) ((volatile void __iomem *)(_IO_BASE + (port)))
#ifdef __powerpc64__
static inline unsigned long __raw_readq(const volatile void __iomem *addr)
{
return *(volatile unsigned long __force *)PCI_FIX_ADDR(addr);
}
#define __raw_readq __raw_readq
static inline void __raw_writeq(unsigned long v, volatile void __iomem *addr)
{
*(volatile unsigned long __force *)PCI_FIX_ADDR(addr) = v;
}
#define __raw_writeq __raw_writeq
static inline void __raw_writeq_be(unsigned long v, volatile void __iomem *addr)
{
__raw_writeq((__force unsigned long)cpu_to_be64(v), addr);
}
#define __raw_writeq_be __raw_writeq_be
/*
- * Real mode versions of the above. Those instructions are only supposed
+ * Real mode versions of raw accessors. Those instructions are only supposed
* to be used in hypervisor real mode as per the architecture spec.
*/
static inline void __raw_rm_writeb(u8 val, volatile void __iomem *paddr)
@ -551,30 +433,23 @@ __do_out_asm(_rec_outl, "stwbrx")
* possible to hook directly at the toplevel PIO operation if they have to
* be handled differently
*/
#define __do_writeb(val, addr) out_8(PCI_FIX_ADDR(addr), val)
#define __do_writew(val, addr) out_le16(PCI_FIX_ADDR(addr), val)
#define __do_writel(val, addr) out_le32(PCI_FIX_ADDR(addr), val)
#define __do_writeq(val, addr) out_le64(PCI_FIX_ADDR(addr), val)
#define __do_writew_be(val, addr) out_be16(PCI_FIX_ADDR(addr), val)
#define __do_writel_be(val, addr) out_be32(PCI_FIX_ADDR(addr), val)
#define __do_writeq_be(val, addr) out_be64(PCI_FIX_ADDR(addr), val)
#ifdef CONFIG_EEH
#define __do_readb(addr) eeh_readb(PCI_FIX_ADDR(addr))
#define __do_readw(addr) eeh_readw(PCI_FIX_ADDR(addr))
#define __do_readl(addr) eeh_readl(PCI_FIX_ADDR(addr))
#define __do_readq(addr) eeh_readq(PCI_FIX_ADDR(addr))
#define __do_readw_be(addr) eeh_readw_be(PCI_FIX_ADDR(addr))
#define __do_readl_be(addr) eeh_readl_be(PCI_FIX_ADDR(addr))
#define __do_readq_be(addr) eeh_readq_be(PCI_FIX_ADDR(addr))
#define __do_readb(addr) eeh_readb(addr)
#define __do_readw(addr) eeh_readw(addr)
#define __do_readl(addr) eeh_readl(addr)
#define __do_readq(addr) eeh_readq(addr)
#define __do_readw_be(addr) eeh_readw_be(addr)
#define __do_readl_be(addr) eeh_readl_be(addr)
#define __do_readq_be(addr) eeh_readq_be(addr)
#else /* CONFIG_EEH */
#define __do_readb(addr) in_8(PCI_FIX_ADDR(addr))
#define __do_readw(addr) in_le16(PCI_FIX_ADDR(addr))
#define __do_readl(addr) in_le32(PCI_FIX_ADDR(addr))
#define __do_readq(addr) in_le64(PCI_FIX_ADDR(addr))
#define __do_readw_be(addr) in_be16(PCI_FIX_ADDR(addr))
#define __do_readl_be(addr) in_be32(PCI_FIX_ADDR(addr))
#define __do_readq_be(addr) in_be64(PCI_FIX_ADDR(addr))
#define __do_readb(addr) in_8(addr)
#define __do_readw(addr) in_le16(addr)
#define __do_readl(addr) in_le32(addr)
#define __do_readq(addr) in_le64(addr)
#define __do_readw_be(addr) in_be16(addr)
#define __do_readl_be(addr) in_be32(addr)
#define __do_readq_be(addr) in_be64(addr)
#endif /* !defined(CONFIG_EEH) */
#ifdef CONFIG_PPC32
@ -585,64 +460,185 @@ __do_out_asm(_rec_outl, "stwbrx")
#define __do_inw(port) _rec_inw(port)
#define __do_inl(port) _rec_inl(port)
#else /* CONFIG_PPC32 */
#define __do_outb(val, port) writeb(val,(PCI_IO_ADDR)(_IO_BASE+port));
#define __do_outw(val, port) writew(val,(PCI_IO_ADDR)(_IO_BASE+port));
#define __do_outl(val, port) writel(val,(PCI_IO_ADDR)(_IO_BASE+port));
#define __do_inb(port) readb((PCI_IO_ADDR)(_IO_BASE + port));
#define __do_inw(port) readw((PCI_IO_ADDR)(_IO_BASE + port));
#define __do_inl(port) readl((PCI_IO_ADDR)(_IO_BASE + port));
#define __do_outb(val, port) writeb(val,_IO_PORT(port));
#define __do_outw(val, port) writew(val,_IO_PORT(port));
#define __do_outl(val, port) writel(val,_IO_PORT(port));
#define __do_inb(port) readb(_IO_PORT(port));
#define __do_inw(port) readw(_IO_PORT(port));
#define __do_inl(port) readl(_IO_PORT(port));
#endif /* !CONFIG_PPC32 */
#ifdef CONFIG_EEH
#define __do_readsb(a, b, n) eeh_readsb(PCI_FIX_ADDR(a), (b), (n))
#define __do_readsw(a, b, n) eeh_readsw(PCI_FIX_ADDR(a), (b), (n))
#define __do_readsl(a, b, n) eeh_readsl(PCI_FIX_ADDR(a), (b), (n))
#define __do_readsb(a, b, n) eeh_readsb(a, (b), (n))
#define __do_readsw(a, b, n) eeh_readsw(a, (b), (n))
#define __do_readsl(a, b, n) eeh_readsl(a, (b), (n))
#else /* CONFIG_EEH */
#define __do_readsb(a, b, n) _insb(PCI_FIX_ADDR(a), (b), (n))
#define __do_readsw(a, b, n) _insw(PCI_FIX_ADDR(a), (b), (n))
#define __do_readsl(a, b, n) _insl(PCI_FIX_ADDR(a), (b), (n))
#define __do_readsb(a, b, n) _insb(a, (b), (n))
#define __do_readsw(a, b, n) _insw(a, (b), (n))
#define __do_readsl(a, b, n) _insl(a, (b), (n))
#endif /* !CONFIG_EEH */
#define __do_writesb(a, b, n) _outsb(PCI_FIX_ADDR(a),(b),(n))
#define __do_writesw(a, b, n) _outsw(PCI_FIX_ADDR(a),(b),(n))
#define __do_writesl(a, b, n) _outsl(PCI_FIX_ADDR(a),(b),(n))
#define __do_writesb(a, b, n) _outsb(a, (b), (n))
#define __do_writesw(a, b, n) _outsw(a, (b), (n))
#define __do_writesl(a, b, n) _outsl(a, (b), (n))
#define __do_insb(p, b, n) readsb((PCI_IO_ADDR)(_IO_BASE+(p)), (b), (n))
#define __do_insw(p, b, n) readsw((PCI_IO_ADDR)(_IO_BASE+(p)), (b), (n))
#define __do_insl(p, b, n) readsl((PCI_IO_ADDR)(_IO_BASE+(p)), (b), (n))
#define __do_outsb(p, b, n) writesb((PCI_IO_ADDR)(_IO_BASE+(p)),(b),(n))
#define __do_outsw(p, b, n) writesw((PCI_IO_ADDR)(_IO_BASE+(p)),(b),(n))
#define __do_outsl(p, b, n) writesl((PCI_IO_ADDR)(_IO_BASE+(p)),(b),(n))
#define __do_memset_io(addr, c, n) \
_memset_io(PCI_FIX_ADDR(addr), c, n)
#define __do_memcpy_toio(dst, src, n) \
_memcpy_toio(PCI_FIX_ADDR(dst), src, n)
#define __do_insb(p, b, n) readsb(_IO_PORT(p), (b), (n))
#define __do_insw(p, b, n) readsw(_IO_PORT(p), (b), (n))
#define __do_insl(p, b, n) readsl(_IO_PORT(p), (b), (n))
#define __do_outsb(p, b, n) writesb(_IO_PORT(p),(b),(n))
#define __do_outsw(p, b, n) writesw(_IO_PORT(p),(b),(n))
#define __do_outsl(p, b, n) writesl(_IO_PORT(p),(b),(n))
#ifdef CONFIG_EEH
#define __do_memcpy_fromio(dst, src, n) \
eeh_memcpy_fromio(dst, PCI_FIX_ADDR(src), n)
eeh_memcpy_fromio(dst, src, n)
#else /* CONFIG_EEH */
#define __do_memcpy_fromio(dst, src, n) \
_memcpy_fromio(dst,PCI_FIX_ADDR(src),n)
_memcpy_fromio(dst, src, n)
#endif /* !CONFIG_EEH */
#ifdef CONFIG_PPC_INDIRECT_PIO
#define DEF_PCI_HOOK_pio(x) x
#else
#define DEF_PCI_HOOK_pio(x) NULL
#endif
static inline u8 readb(const volatile void __iomem *addr)
{
return __do_readb(addr);
}
#define readb readb
#ifdef CONFIG_PPC_INDIRECT_MMIO
#define DEF_PCI_HOOK_mem(x) x
static inline u16 readw(const volatile void __iomem *addr)
{
return __do_readw(addr);
}
#define readw readw
static inline u32 readl(const volatile void __iomem *addr)
{
return __do_readl(addr);
}
#define readl readl
static inline u16 readw_be(const volatile void __iomem *addr)
{
return __do_readw_be(addr);
}
static inline u32 readl_be(const volatile void __iomem *addr)
{
return __do_readl_be(addr);
}
static inline void writeb(u8 val, volatile void __iomem *addr)
{
out_8(addr, val);
}
#define writeb writeb
static inline void writew(u16 val, volatile void __iomem *addr)
{
out_le16(addr, val);
}
#define writew writew
static inline void writel(u32 val, volatile void __iomem *addr)
{
out_le32(addr, val);
}
#define writel writel
static inline void writew_be(u16 val, volatile void __iomem *addr)
{
out_be16(addr, val);
}
static inline void writel_be(u32 val, volatile void __iomem *addr)
{
out_be32(addr, val);
}
static inline void readsb(const volatile void __iomem *a, void *b, unsigned long c)
{
__do_readsb(a, b, c);
}
#define readsb readsb
static inline void readsw(const volatile void __iomem *a, void *b, unsigned long c)
{
__do_readsw(a, b, c);
}
#define readsw readsw
static inline void readsl(const volatile void __iomem *a, void *b, unsigned long c)
{
__do_readsl(a, b, c);
}
#define readsl readsl
static inline void writesb(volatile void __iomem *a, const void *b, unsigned long c)
{
__do_writesb(a, b, c);
}
#define writesb writesb
static inline void writesw(volatile void __iomem *a, const void *b, unsigned long c)
{
__do_writesw(a, b, c);
}
#define writesw writesw
static inline void writesl(volatile void __iomem *a, const void *b, unsigned long c)
{
__do_writesl(a, b, c);
}
#define writesl writesl
static inline void memset_io(volatile void __iomem *a, int c, unsigned long n)
{
_memset_io(a, c, n);
}
#define memset_io memset_io
static inline void memcpy_fromio(void *d, const volatile void __iomem *s, unsigned long n)
{
__do_memcpy_fromio(d, s, n);
}
#define memcpy_fromio memcpy_fromio
static inline void memcpy_toio(volatile void __iomem *d, const void *s, unsigned long n)
{
_memcpy_toio(d, s, n);
}
#define memcpy_toio memcpy_toio
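For orientation, the accessors defined above are what drivers ultimately call. A minimal usage sketch follows (the device base, register offsets and enable bit are hypothetical, not taken from this patch):

	/* Illustrative only: map a device and touch two registers. */
	static u32 toggle_enable(phys_addr_t dev_base)
	{
		void __iomem *regs = ioremap(dev_base, 0x1000);
		u32 status;

		if (!regs)
			return 0;
		status = readl(regs + 0x04);		/* little-endian load */
		writel(status | 0x1, regs + 0x00);	/* hypothetical enable bit */
		iounmap(regs);
		return status;
	}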
#ifdef __powerpc64__
static inline u64 readq(const volatile void __iomem *addr)
{
return __do_readq(addr);
}
static inline u64 readq_be(const volatile void __iomem *addr)
{
return __do_readq_be(addr);
}
static inline void writeq(u64 val, volatile void __iomem *addr)
{
out_le64(addr, val);
}
static inline void writeq_be(u64 val, volatile void __iomem *addr)
{
out_be64(addr, val);
}
#endif /* __powerpc64__ */
#ifdef CONFIG_PPC_INDIRECT_PIO
#define DEF_PCI_HOOK(x) x
#else
#define DEF_PCI_HOOK_mem(x) NULL
#define DEF_PCI_HOOK(x) NULL
#endif
/* Structure containing all the hooks */
extern struct ppc_pci_io {
#define DEF_PCI_AC_RET(name, ret, at, al, space, aa) ret (*name) at;
#define DEF_PCI_AC_NORET(name, at, al, space, aa) void (*name) at;
#define DEF_PCI_AC_RET(name, ret, at, al) ret (*name) at;
#define DEF_PCI_AC_NORET(name, at, al) void (*name) at;
#include <asm/io-defs.h>
@@ -652,18 +648,18 @@ extern struct ppc_pci_io {
} ppc_pci_io;
/* The inline wrappers */
#define DEF_PCI_AC_RET(name, ret, at, al, space, aa) \
#define DEF_PCI_AC_RET(name, ret, at, al) \
static inline ret name at \
{ \
if (DEF_PCI_HOOK_##space(ppc_pci_io.name) != NULL) \
if (DEF_PCI_HOOK(ppc_pci_io.name) != NULL) \
return ppc_pci_io.name al; \
return __do_##name al; \
}
#define DEF_PCI_AC_NORET(name, at, al, space, aa) \
#define DEF_PCI_AC_NORET(name, at, al) \
static inline void name at \
{ \
if (DEF_PCI_HOOK_##space(ppc_pci_io.name) != NULL) \
if (DEF_PCI_HOOK(ppc_pci_io.name) != NULL) \
ppc_pci_io.name al; \
else \
__do_##name al; \
@@ -674,21 +670,7 @@ static inline void name at \
#undef DEF_PCI_AC_RET
#undef DEF_PCI_AC_NORET
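To make the token pasting concrete: assuming asm/io-defs.h carries an entry like DEF_PCI_AC_RET(inb, u8, (unsigned long port), (port)), the wrapper macro expands to roughly the following (a sketch, not verbatim preprocessor output):

	static inline u8 inb(unsigned long port)
	{
		/* With CONFIG_PPC_INDIRECT_PIO, consult the hook table first. */
		if (DEF_PCI_HOOK(ppc_pci_io.inb) != NULL)
			return ppc_pci_io.inb(port);
		/* Otherwise this compiles down to __do_inb() directly. */
		return __do_inb(port);
	}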
/* Some drivers check for the presence of readq & writeq with
* a #ifdef, so we make them happy here.
*/
#define readb readb
#define readw readw
#define readl readl
#define writeb writeb
#define writew writew
#define writel writel
#define readsb readsb
#define readsw readsw
#define readsl readsl
#define writesb writesb
#define writesw writesw
#define writesl writesl
// Signal to asm-generic/io.h that we have implemented these.
#define inb inb
#define inw inw
#define inl inl
@@ -705,9 +687,6 @@ static inline void name at \
#define readq readq
#define writeq writeq
#endif
#define memset_io memset_io
#define memcpy_fromio memcpy_fromio
#define memcpy_toio memcpy_toio
/*
* We don't do relaxed operations yet, at least not with this semantic
@@ -982,6 +961,14 @@ static inline void * bus_to_virt(unsigned long address)
#include <asm-generic/io.h>
#ifdef __powerpc64__
static inline void __raw_writeq_be(unsigned long v, volatile void __iomem *addr)
{
__raw_writeq((__force unsigned long)cpu_to_be64(v), addr);
}
#define __raw_writeq_be __raw_writeq_be
#endif // __powerpc64__
#endif /* __KERNEL__ */
#endif /* _ASM_POWERPC_IO_H */

@@ -317,12 +317,6 @@ extern void iommu_flush_tce(struct iommu_table *tbl);
extern enum dma_data_direction iommu_tce_direction(unsigned long tce);
extern unsigned long iommu_direction_to_tce_perm(enum dma_data_direction dir);
#ifdef CONFIG_PPC_CELL_NATIVE
extern bool iommu_fixed_is_weak;
#else
#define iommu_fixed_is_weak false
#endif
extern const struct dma_map_ops dma_iommu_ops;
#endif /* __KERNEL__ */

@@ -29,6 +29,7 @@ extern cpumask_var_t node_to_cpumask_map[];
#ifdef CONFIG_MEMORY_HOTPLUG
extern unsigned long max_pfn;
u64 memory_hotplug_max(void);
u64 hot_add_drconf_memory_max(void);
#else
#define memory_hotplug_max() memblock_end_of_DRAM()
#endif

@@ -65,6 +65,27 @@ static inline long register_dtl(unsigned long cpu, unsigned long vpa)
return vpa_call(H_VPA_REG_DTL, cpu, vpa);
}
static inline long htm_call(unsigned long flags, unsigned long target,
unsigned long operation, unsigned long param1,
unsigned long param2, unsigned long param3)
{
return plpar_hcall_norets(H_HTM, flags, target, operation,
param1, param2, param3);
}
static inline long htm_get_dump_hardware(unsigned long nodeindex,
unsigned long nodalchipindex, unsigned long coreindexonchip,
unsigned long type, unsigned long addr, unsigned long size,
unsigned long offset)
{
return htm_call(H_HTM_FLAGS_HARDWARE_TARGET,
H_HTM_TARGET_NODE_INDEX(nodeindex) |
H_HTM_TARGET_NODAL_CHIP_INDEX(nodalchipindex) |
H_HTM_TARGET_CORE_INDEX_ON_CHIP(coreindexonchip),
H_HTM_OP(H_HTM_OP_DUMP_DATA) | H_HTM_TYPE(type),
addr, size, offset);
}
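A hypothetical caller of the new wrapper might look like the sketch below; the type, address and size values are placeholders (the real consumers are the HTM patches elsewhere in this series):

	/* Sketch: dump HTM data for node 0 / chip 0 / core 0 into a
	 * preallocated physical buffer; type/addr/size are assumed set up. */
	static int htm_dump_example(unsigned long type, unsigned long addr,
				    unsigned long size)
	{
		long rc = htm_get_dump_hardware(0, 0, 0, type, addr, size, 0);

		if (rc != H_SUCCESS) {
			pr_err("H_HTM dump hcall failed: %ld\n", rc);
			return -EIO;
		}
		return 0;
	}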
extern void vpa_init(int cpu);
static inline long plpar_pte_enter(unsigned long flags,

@@ -1,53 +0,0 @@
/* SPDX-License-Identifier: GPL-2.0-or-later */
#ifndef _POWERPC_PMI_H
#define _POWERPC_PMI_H
/*
* Definitions for talking with PMI device on PowerPC
*
* PMI (Platform Management Interrupt) is a way to communicate
* with the BMC (Baseboard Management Controller) via interrupts.
* Unlike IPMI it is bidirectional and has a low latency.
*
* (C) Copyright IBM Deutschland Entwicklung GmbH 2005
*
* Author: Christian Krafft <krafft@de.ibm.com>
*/
#ifdef __KERNEL__
#define PMI_TYPE_FREQ_CHANGE 0x01
#define PMI_TYPE_POWER_BUTTON 0x02
#define PMI_READ_TYPE 0
#define PMI_READ_DATA0 1
#define PMI_READ_DATA1 2
#define PMI_READ_DATA2 3
#define PMI_WRITE_TYPE 4
#define PMI_WRITE_DATA0 5
#define PMI_WRITE_DATA1 6
#define PMI_WRITE_DATA2 7
#define PMI_ACK 0x80
#define PMI_TIMEOUT 100
typedef struct {
u8 type;
u8 data0;
u8 data1;
u8 data2;
} pmi_message_t;
struct pmi_handler {
struct list_head node;
u8 type;
void (*handle_pmi_message) (pmi_message_t);
};
int pmi_register_handler(struct pmi_handler *);
void pmi_unregister_handler(struct pmi_handler *);
int pmi_send_message(pmi_message_t);
#endif /* __KERNEL__ */
#endif /* _POWERPC_PMI_H */

@@ -17,6 +17,8 @@
struct device_node;
struct property;
#define MIN_RMA 768 /* Minimum RMA (in MB) for CAS negotiation */
#define OF_DT_BEGIN_NODE 0x1 /* Start of node, full name */
#define OF_DT_END_NODE 0x2 /* End node */
#define OF_DT_PROP 0x3 /* Property: name off, size,

@@ -215,8 +215,6 @@ spu_disable_spu (struct spu_context *ctx)
* and only intended to be used by the platform setup code.
*/
extern const struct spu_priv1_ops spu_priv1_mmio_ops;
extern const struct spu_management_ops spu_management_of_ops;
#endif /* __KERNEL__ */

@@ -26,4 +26,6 @@
#define ARCH_DEFINE_STATIC_CALL_NULL_TRAMP(name) __PPC_SCT(name, "blr")
#define ARCH_DEFINE_STATIC_CALL_RET0_TRAMP(name) __PPC_SCT(name, "b .+20")
#define CALL_INSN_SIZE 4
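(CALL_INSN_SIZE is what the generic inline static-call code uses to size the call site it patches: every call here is a single 4-byte "bl" instruction, which is what makes inline static calls on PPC32 practical.)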
#endif /* _ASM_POWERPC_STATIC_CALL_H */

@@ -89,9 +89,6 @@ static inline unsigned long tb_ticks_since(unsigned long tstamp)
#define mulhdu(x, y) mul_u64_u64_shr(x, y, 64)
#endif
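(For reference: mulhdu(x, y) is the high half of the 128-bit product, i.e. floor(x * y / 2^64), and mul_u64_u64_shr() with a shift of 64 is the generic helper computing exactly that. For example, mulhdu(2^64 - 1, 2) == 1, since the full product is 2^65 - 2.)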
extern void div128_by_32(u64 dividend_high, u64 dividend_low,
unsigned divisor, struct div_result *dr);
extern void secondary_cpu_time_init(void);
extern void __init time_init(void);

@@ -31,7 +31,6 @@
#ifdef CONFIG_PPC_ICP_NATIVE
extern int icp_native_init(void);
extern void icp_native_flush_interrupt(void);
extern void icp_native_cause_ipi_rm(int cpu);
#else
static inline int icp_native_init(void) { return -ENODEV; }
#endif

@@ -12,13 +12,11 @@
#ifdef CONFIG_XMON
extern void xmon_setup(void);
void __init xmon_register_spus(struct list_head *list);
struct pt_regs;
extern int xmon(struct pt_regs *excp);
extern irqreturn_t xmon_irq(int, void *);
#else
static inline void xmon_setup(void) { }
static inline void xmon_register_spus(struct list_head *list) { }
#endif
#if defined(CONFIG_XMON) && defined(CONFIG_SMP)

@@ -70,7 +70,7 @@ obj-y := cputable.o syscalls.o switch.o \
signal.o sysfs.o cacheinfo.o time.o \
prom.o traps.o setup-common.o \
udbg.o misc.o io.o misc_$(BITS).o \
of_platform.o prom_parse.o firmware.o \
prom_parse.o firmware.o \
hw_breakpoint_constraints.o interrupt.o \
kdebugfs.o stacktrace.o syscall.o
obj-y += ptrace/
@@ -152,8 +152,6 @@ obj-$(CONFIG_PCI_MSI) += msi.o
obj-$(CONFIG_AUDIT) += audit.o
obj64-$(CONFIG_AUDIT) += compat_audit.o
obj-$(CONFIG_PPC_IO_WORKAROUNDS) += io-workarounds.o
obj-y += trace/
ifneq ($(CONFIG_PPC_INDIRECT_PIO),y)

@@ -136,7 +136,7 @@ static bool dma_iommu_bypass_supported(struct device *dev, u64 mask)
struct pci_dev *pdev = to_pci_dev(dev);
struct pci_controller *phb = pci_bus_to_host(pdev->bus);
if (iommu_fixed_is_weak || !phb->controller_ops.iommu_bypass_supported)
if (!phb->controller_ops.iommu_bypass_supported)
return false;
return phb->controller_ops.iommu_bypass_supported(pdev, mask);
}

@@ -2537,27 +2537,8 @@ EXC_REAL_NONE(0x1000, 0x100)
EXC_VIRT_NONE(0x5000, 0x100)
EXC_REAL_NONE(0x1100, 0x100)
EXC_VIRT_NONE(0x5100, 0x100)
#ifdef CONFIG_CBE_RAS
INT_DEFINE_BEGIN(cbe_system_error)
IVEC=0x1200
IHSRR=1
INT_DEFINE_END(cbe_system_error)
EXC_REAL_BEGIN(cbe_system_error, 0x1200, 0x100)
GEN_INT_ENTRY cbe_system_error, virt=0
EXC_REAL_END(cbe_system_error, 0x1200, 0x100)
EXC_VIRT_NONE(0x5200, 0x100)
EXC_COMMON_BEGIN(cbe_system_error_common)
GEN_COMMON cbe_system_error
addi r3,r1,STACK_INT_FRAME_REGS
bl CFUNC(cbe_system_error_exception)
b interrupt_return_hsrr
#else /* CONFIG_CBE_RAS */
EXC_REAL_NONE(0x1200, 0x100)
EXC_VIRT_NONE(0x5200, 0x100)
#endif
/**
* Interrupt 0x1300 - Instruction Address Breakpoint Interrupt.
@@ -2708,26 +2689,8 @@ EXC_COMMON_BEGIN(denorm_exception_common)
b interrupt_return_hsrr
#ifdef CONFIG_CBE_RAS
INT_DEFINE_BEGIN(cbe_maintenance)
IVEC=0x1600
IHSRR=1
INT_DEFINE_END(cbe_maintenance)
EXC_REAL_BEGIN(cbe_maintenance, 0x1600, 0x100)
GEN_INT_ENTRY cbe_maintenance, virt=0
EXC_REAL_END(cbe_maintenance, 0x1600, 0x100)
EXC_VIRT_NONE(0x5600, 0x100)
EXC_COMMON_BEGIN(cbe_maintenance_common)
GEN_COMMON cbe_maintenance
addi r3,r1,STACK_INT_FRAME_REGS
bl CFUNC(cbe_maintenance_exception)
b interrupt_return_hsrr
#else /* CONFIG_CBE_RAS */
EXC_REAL_NONE(0x1600, 0x100)
EXC_VIRT_NONE(0x5600, 0x100)
#endif
INT_DEFINE_BEGIN(altivec_assist)
@@ -2755,26 +2718,8 @@ EXC_COMMON_BEGIN(altivec_assist_common)
b interrupt_return_srr
#ifdef CONFIG_CBE_RAS
INT_DEFINE_BEGIN(cbe_thermal)
IVEC=0x1800
IHSRR=1
INT_DEFINE_END(cbe_thermal)
EXC_REAL_BEGIN(cbe_thermal, 0x1800, 0x100)
GEN_INT_ENTRY cbe_thermal, virt=0
EXC_REAL_END(cbe_thermal, 0x1800, 0x100)
EXC_VIRT_NONE(0x5800, 0x100)
EXC_COMMON_BEGIN(cbe_thermal_common)
GEN_COMMON cbe_thermal
addi r3,r1,STACK_INT_FRAME_REGS
bl CFUNC(cbe_thermal_exception)
b interrupt_return_hsrr
#else /* CONFIG_CBE_RAS */
EXC_REAL_NONE(0x1800, 0x100)
EXC_VIRT_NONE(0x5800, 0x100)
#endif
#ifdef CONFIG_PPC_WATCHDOG

@@ -33,6 +33,7 @@
#include <asm/fadump-internal.h>
#include <asm/setup.h>
#include <asm/interrupt.h>
#include <asm/prom.h>
/*
* The CPU who acquired the lock to trigger the fadump crash should
@@ -1764,19 +1765,19 @@ void __init fadump_setup_param_area(void)
range_end = memblock_end_of_DRAM();
} else {
/*
* Passing additional parameters is supported for hash MMU only
* if the first memory block size is 768MB or higher.
* Memory range for passing additional parameters for HASH MMU
* must meet the following conditions:
* 1. The first memory block size must be higher than the
* minimum RMA (MIN_RMA) size. The bootloader can use memory
* up to the RMA size, so that region should be avoided.
* 2. The range should be between MIN_RMA and the RMA size
* (ppc64_rma_size).
* 3. It must not overlap with the fadump reserved area.
*/
if (ppc64_rma_size < 0x30000000)
if (ppc64_rma_size < MIN_RMA*1024*1024)
return;
/*
* 640 MB to 768 MB is not used by PFW/bootloader. So, try reserving
* memory for passing additional parameters in this range to avoid
* being stomped on by PFW/bootloader.
*/
range_start = 0x2A000000;
range_end = range_start + 0x4000000;
range_start = MIN_RMA * 1024 * 1024;
range_end = min(ppc64_rma_size, fw_dump.boot_mem_top);
}
fw_dump.param_area = memblock_phys_alloc_range(COMMAND_LINE_SIZE,

@@ -1,197 +0,0 @@
// SPDX-License-Identifier: GPL-2.0-only
/*
* Support PCI IO workaround
*
* Copyright (C) 2006 Benjamin Herrenschmidt <benh@kernel.crashing.org>
* IBM, Corp.
* (C) Copyright 2007-2008 TOSHIBA CORPORATION
*/
#undef DEBUG
#include <linux/kernel.h>
#include <linux/sched/mm.h> /* for init_mm */
#include <linux/pgtable.h>
#include <asm/io.h>
#include <asm/machdep.h>
#include <asm/ppc-pci.h>
#include <asm/io-workarounds.h>
#include <asm/pte-walk.h>
#define IOWA_MAX_BUS 8
static struct iowa_bus iowa_busses[IOWA_MAX_BUS];
static unsigned int iowa_bus_count;
static struct iowa_bus *iowa_pci_find(unsigned long vaddr, unsigned long paddr)
{
int i, j;
struct resource *res;
unsigned long vstart, vend;
for (i = 0; i < iowa_bus_count; i++) {
struct iowa_bus *bus = &iowa_busses[i];
struct pci_controller *phb = bus->phb;
if (vaddr) {
vstart = (unsigned long)phb->io_base_virt;
vend = vstart + phb->pci_io_size - 1;
if ((vaddr >= vstart) && (vaddr <= vend))
return bus;
}
if (paddr)
for (j = 0; j < 3; j++) {
res = &phb->mem_resources[j];
if (paddr >= res->start && paddr <= res->end)
return bus;
}
}
return NULL;
}
#ifdef CONFIG_PPC_INDIRECT_MMIO
struct iowa_bus *iowa_mem_find_bus(const PCI_IO_ADDR addr)
{
struct iowa_bus *bus;
int token;
token = PCI_GET_ADDR_TOKEN(addr);
if (token && token <= iowa_bus_count)
bus = &iowa_busses[token - 1];
else {
unsigned long vaddr, paddr;
vaddr = (unsigned long)PCI_FIX_ADDR(addr);
if (vaddr < PHB_IO_BASE || vaddr >= PHB_IO_END)
return NULL;
paddr = ppc_find_vmap_phys(vaddr);
bus = iowa_pci_find(vaddr, paddr);
if (bus == NULL)
return NULL;
}
return bus;
}
#else /* CONFIG_PPC_INDIRECT_MMIO */
struct iowa_bus *iowa_mem_find_bus(const PCI_IO_ADDR addr)
{
return NULL;
}
#endif /* !CONFIG_PPC_INDIRECT_MMIO */
#ifdef CONFIG_PPC_INDIRECT_PIO
struct iowa_bus *iowa_pio_find_bus(unsigned long port)
{
unsigned long vaddr = (unsigned long)pci_io_base + port;
return iowa_pci_find(vaddr, 0);
}
#else
struct iowa_bus *iowa_pio_find_bus(unsigned long port)
{
return NULL;
}
#endif
#define DEF_PCI_AC_RET(name, ret, at, al, space, aa) \
static ret iowa_##name at \
{ \
struct iowa_bus *bus; \
bus = iowa_##space##_find_bus(aa); \
if (bus && bus->ops && bus->ops->name) \
return bus->ops->name al; \
return __do_##name al; \
}
#define DEF_PCI_AC_NORET(name, at, al, space, aa) \
static void iowa_##name at \
{ \
struct iowa_bus *bus; \
bus = iowa_##space##_find_bus(aa); \
if (bus && bus->ops && bus->ops->name) { \
bus->ops->name al; \
return; \
} \
__do_##name al; \
}
#include <asm/io-defs.h>
#undef DEF_PCI_AC_RET
#undef DEF_PCI_AC_NORET
static const struct ppc_pci_io iowa_pci_io = {
#define DEF_PCI_AC_RET(name, ret, at, al, space, aa) .name = iowa_##name,
#define DEF_PCI_AC_NORET(name, at, al, space, aa) .name = iowa_##name,
#include <asm/io-defs.h>
#undef DEF_PCI_AC_RET
#undef DEF_PCI_AC_NORET
};
#ifdef CONFIG_PPC_INDIRECT_MMIO
void __iomem *iowa_ioremap(phys_addr_t addr, unsigned long size,
pgprot_t prot, void *caller)
{
struct iowa_bus *bus;
void __iomem *res = __ioremap_caller(addr, size, prot, caller);
int busno;
bus = iowa_pci_find(0, (unsigned long)addr);
if (bus != NULL) {
busno = bus - iowa_busses;
PCI_SET_ADDR_TOKEN(res, busno + 1);
}
return res;
}
#endif /* !CONFIG_PPC_INDIRECT_MMIO */
bool io_workaround_inited;
/* Enable IO workaround */
static void io_workaround_init(void)
{
if (io_workaround_inited)
return;
ppc_pci_io = iowa_pci_io;
io_workaround_inited = true;
}
/* Register new bus to support workaround */
void iowa_register_bus(struct pci_controller *phb, struct ppc_pci_io *ops,
int (*initfunc)(struct iowa_bus *, void *), void *data)
{
struct iowa_bus *bus;
struct device_node *np = phb->dn;
io_workaround_init();
if (iowa_bus_count >= IOWA_MAX_BUS) {
pr_err("IOWA:Too many pci bridges, "
"workarounds disabled for %pOF\n", np);
return;
}
bus = &iowa_busses[iowa_bus_count];
bus->phb = phb;
bus->ops = ops;
bus->private = data;
if (initfunc)
if ((*initfunc)(bus, data))
return;
iowa_bus_count++;
pr_debug("IOWA:[%d]Add bus, %pOF.\n", iowa_bus_count-1, np);
}

@@ -31,13 +31,14 @@ void _insb(const volatile u8 __iomem *port, void *buf, long count)
if (unlikely(count <= 0))
return;
asm volatile("sync");
mb();
do {
tmp = *(const volatile u8 __force *)port;
eieio();
*tbuf++ = tmp;
} while (--count != 0);
asm volatile("twi 0,%0,0; isync" : : "r" (tmp));
data_barrier(tmp);
}
EXPORT_SYMBOL(_insb);
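For context (paraphrased from asm/barrier.h, not part of this patch): mb() issues the same full "sync" the open-coded asm did, and data_barrier() wraps the "twi 0,%0,0; isync" trick that stalls until the loaded value is available, so these hunks swap the inline asm for the existing helpers one-for-one:

	/* Roughly, from arch/powerpc/include/asm/barrier.h: */
	#define mb()	__asm__ __volatile__ ("sync" : : : "memory")
	#define data_barrier(x) \
		asm volatile("twi 0,%0,0; isync" : : "r" (x) : "memory")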
@@ -47,75 +48,80 @@ void _outsb(volatile u8 __iomem *port, const void *buf, long count)
if (unlikely(count <= 0))
return;
asm volatile("sync");
mb();
do {
*(volatile u8 __force *)port = *tbuf++;
} while (--count != 0);
asm volatile("sync");
mb();
}
EXPORT_SYMBOL(_outsb);
void _insw_ns(const volatile u16 __iomem *port, void *buf, long count)
void _insw(const volatile u16 __iomem *port, void *buf, long count)
{
u16 *tbuf = buf;
u16 tmp;
if (unlikely(count <= 0))
return;
asm volatile("sync");
mb();
do {
tmp = *(const volatile u16 __force *)port;
eieio();
*tbuf++ = tmp;
} while (--count != 0);
asm volatile("twi 0,%0,0; isync" : : "r" (tmp));
data_barrier(tmp);
}
EXPORT_SYMBOL(_insw_ns);
EXPORT_SYMBOL(_insw);
void _outsw_ns(volatile u16 __iomem *port, const void *buf, long count)
void _outsw(volatile u16 __iomem *port, const void *buf, long count)
{
const u16 *tbuf = buf;
if (unlikely(count <= 0))
return;
asm volatile("sync");
mb();
do {
*(volatile u16 __force *)port = *tbuf++;
} while (--count != 0);
asm volatile("sync");
mb();
}
EXPORT_SYMBOL(_outsw_ns);
EXPORT_SYMBOL(_outsw);
void _insl_ns(const volatile u32 __iomem *port, void *buf, long count)
void _insl(const volatile u32 __iomem *port, void *buf, long count)
{
u32 *tbuf = buf;
u32 tmp;
if (unlikely(count <= 0))
return;
asm volatile("sync");
mb();
do {
tmp = *(const volatile u32 __force *)port;
eieio();
*tbuf++ = tmp;
} while (--count != 0);
asm volatile("twi 0,%0,0; isync" : : "r" (tmp));
data_barrier(tmp);
}
EXPORT_SYMBOL(_insl_ns);
EXPORT_SYMBOL(_insl);
void _outsl_ns(volatile u32 __iomem *port, const void *buf, long count)
void _outsl(volatile u32 __iomem *port, const void *buf, long count)
{
const u32 *tbuf = buf;
if (unlikely(count <= 0))
return;
asm volatile("sync");
mb();
do {
*(volatile u32 __force *)port = *tbuf++;
} while (--count != 0);
asm volatile("sync");
mb();
}
EXPORT_SYMBOL(_outsl_ns);
EXPORT_SYMBOL(_outsl);
#define IO_CHECK_ALIGN(v,a) ((((unsigned long)(v)) & ((a) - 1)) == 0)
@@ -127,7 +133,7 @@ _memset_io(volatile void __iomem *addr, int c, unsigned long n)
lc |= lc << 8;
lc |= lc << 16;
__asm__ __volatile__ ("sync" : : : "memory");
mb();
while(n && !IO_CHECK_ALIGN(p, 4)) {
*((volatile u8 *)p) = c;
p++;
@@ -143,7 +149,7 @@ _memset_io(volatile void __iomem *addr, int c, unsigned long n)
p++;
n--;
}
__asm__ __volatile__ ("sync" : : : "memory");
mb();
}
EXPORT_SYMBOL(_memset_io);
@@ -152,7 +158,7 @@ void _memcpy_fromio(void *dest, const volatile void __iomem *src,
{
void *vsrc = (void __force *) src;
__asm__ __volatile__ ("sync" : : : "memory");
mb();
while(n && (!IO_CHECK_ALIGN(vsrc, 4) || !IO_CHECK_ALIGN(dest, 4))) {
*((u8 *)dest) = *((volatile u8 *)vsrc);
eieio();
@@ -174,7 +180,7 @@ void _memcpy_fromio(void *dest, const volatile void __iomem *src,
dest++;
n--;
}
__asm__ __volatile__ ("sync" : : : "memory");
mb();
}
EXPORT_SYMBOL(_memcpy_fromio);
@@ -182,7 +188,7 @@ void _memcpy_toio(volatile void __iomem *dest, const void *src, unsigned long n)
{
void *vdest = (void __force *) dest;
__asm__ __volatile__ ("sync" : : : "memory");
mb();
while(n && (!IO_CHECK_ALIGN(vdest, 4) || !IO_CHECK_ALIGN(src, 4))) {
*((volatile u8 *)vdest) = *((u8 *)src);
src++;
@@ -201,6 +207,6 @@ void _memcpy_toio(volatile void __iomem *dest, const void *src, unsigned long n)
vdest++;
n--;
}
__asm__ __volatile__ ("sync" : : : "memory");
mb();
}
EXPORT_SYMBOL(_memcpy_toio);

@@ -1,102 +0,0 @@
// SPDX-License-Identifier: GPL-2.0-or-later
/*
* Copyright (C) 2006 Benjamin Herrenschmidt, IBM Corp.
* <benh@kernel.crashing.org>
* and Arnd Bergmann, IBM Corp.
*/
#undef DEBUG
#include <linux/string.h>
#include <linux/kernel.h>
#include <linux/init.h>
#include <linux/export.h>
#include <linux/mod_devicetable.h>
#include <linux/pci.h>
#include <linux/platform_device.h>
#include <linux/atomic.h>
#include <asm/errno.h>
#include <asm/topology.h>
#include <asm/pci-bridge.h>
#include <asm/ppc-pci.h>
#include <asm/eeh.h>
#ifdef CONFIG_PPC_OF_PLATFORM_PCI
/* The probing of PCI controllers from of_platform is currently
* 64 bits only, mostly due to gratuitous differences between
* the 32 and 64 bits PCI code on PowerPC and the 32 bits one
* lacking some bits needed here.
*/
static int of_pci_phb_probe(struct platform_device *dev)
{
struct pci_controller *phb;
/* Check if we can do that ... */
if (ppc_md.pci_setup_phb == NULL)
return -ENODEV;
pr_info("Setting up PCI bus %pOF\n", dev->dev.of_node);
/* Alloc and setup PHB data structure */
phb = pcibios_alloc_controller(dev->dev.of_node);
if (!phb)
return -ENODEV;
/* Setup parent in sysfs */
phb->parent = &dev->dev;
/* Setup the PHB using arch provided callback */
if (ppc_md.pci_setup_phb(phb)) {
pcibios_free_controller(phb);
return -ENODEV;
}
/* Process "ranges" property */
pci_process_bridge_OF_ranges(phb, dev->dev.of_node, 0);
/* Init pci_dn data structures */
pci_devs_phb_init_dynamic(phb);
/* Create EEH PE for the PHB */
eeh_phb_pe_create(phb);
/* Scan the bus */
pcibios_scan_phb(phb);
if (phb->bus == NULL)
return -ENXIO;
/* Claim resources. This might need some rework as well depending
* whether we are doing probe-only or not, like assigning unassigned
* resources etc...
*/
pcibios_claim_one_bus(phb->bus);
/* Add probed PCI devices to the device model */
pci_bus_add_devices(phb->bus);
return 0;
}
static const struct of_device_id of_pci_phb_ids[] = {
{ .type = "pci", },
{ .type = "pcix", },
{ .type = "pcie", },
{ .type = "pciex", },
{ .type = "ht", },
{}
};
static struct platform_driver of_pci_phb_driver = {
.probe = of_pci_phb_probe,
.driver = {
.name = "of-pci",
.of_match_table = of_pci_phb_ids,
},
};
builtin_platform_driver(of_pci_phb_driver);
#endif /* CONFIG_PPC_OF_PLATFORM_PCI */

@@ -1061,7 +1061,7 @@ static const struct ibm_arch_vec ibm_architecture_vec_template __initconst = {
.virt_base = cpu_to_be32(0xffffffff),
.virt_size = cpu_to_be32(0xffffffff),
.load_base = cpu_to_be32(0xffffffff),
.min_rma = cpu_to_be32(512), /* 512MB min RMA */
.min_rma = cpu_to_be32(MIN_RMA),
.min_load = cpu_to_be32(0xffffffff), /* full client load */
.min_rma_percent = 0, /* min RMA percentage of total RAM */
.max_pft_size = 48, /* max log_2(hash table size) */
@@ -2889,11 +2889,11 @@ static void __init fixup_device_tree_pmac(void)
char type[8];
phandle node;
// Some pmacs are missing #size-cells on escc nodes
// Some pmacs are missing #size-cells on escc or i2s nodes
for (node = 0; prom_next_node(&node); ) {
type[0] = '\0';
prom_getprop(node, "device_type", type, sizeof(type));
if (prom_strcmp(type, "escc"))
if (prom_strcmp(type, "escc") && prom_strcmp(type, "i2s"))
continue;
if (prom_getproplen(node, "#size-cells") != PROM_ERROR)

@@ -798,66 +798,6 @@ void __init udbg_init_rtas_panel(void)
udbg_putc = call_rtas_display_status_delay;
}
#ifdef CONFIG_UDBG_RTAS_CONSOLE
/* If you think you're dying before early_init_dt_scan_rtas() does its
* work, you can hard code the token values for your firmware here and
* hardcode rtas.base/entry etc.
*/
static unsigned int rtas_putchar_token = RTAS_UNKNOWN_SERVICE;
static unsigned int rtas_getchar_token = RTAS_UNKNOWN_SERVICE;
static void udbg_rtascon_putc(char c)
{
int tries;
if (!rtas.base)
return;
/* Add CRs before LFs */
if (c == '\n')
udbg_rtascon_putc('\r');
/* if there is more than one character to be displayed, wait a bit */
for (tries = 0; tries < 16; tries++) {
if (rtas_call(rtas_putchar_token, 1, 1, NULL, c) == 0)
break;
udelay(1000);
}
}
static int udbg_rtascon_getc_poll(void)
{
int c;
if (!rtas.base)
return -1;
if (rtas_call(rtas_getchar_token, 0, 2, &c))
return -1;
return c;
}
static int udbg_rtascon_getc(void)
{
int c;
while ((c = udbg_rtascon_getc_poll()) == -1)
;
return c;
}
void __init udbg_init_rtas_console(void)
{
udbg_putc = udbg_rtascon_putc;
udbg_getc = udbg_rtascon_getc;
udbg_getc_poll = udbg_rtascon_getc_poll;
}
#endif /* CONFIG_UDBG_RTAS_CONSOLE */
void rtas_progress(char *s, unsigned short hex)
{
struct device_node *root;
@@ -2135,21 +2075,6 @@ int __init early_init_dt_scan_rtas(unsigned long node,
rtas.size = *sizep;
}
#ifdef CONFIG_UDBG_RTAS_CONSOLE
basep = of_get_flat_dt_prop(node, "put-term-char", NULL);
if (basep)
rtas_putchar_token = *basep;
basep = of_get_flat_dt_prop(node, "get-term-char", NULL);
if (basep)
rtas_getchar_token = *basep;
if (rtas_putchar_token != RTAS_UNKNOWN_SERVICE &&
rtas_getchar_token != RTAS_UNKNOWN_SERVICE)
udbg_init_rtas_console();
#endif
/* break now */
return 1;
}

@@ -892,7 +892,7 @@ unsigned long memory_block_size_bytes(void)
}
#endif
#if defined(CONFIG_PPC_INDIRECT_PIO) || defined(CONFIG_PPC_INDIRECT_MMIO)
#ifdef CONFIG_PPC_INDIRECT_PIO
struct ppc_pci_io ppc_pci_io;
EXPORT_SYMBOL(ppc_pci_io);
#endif

@@ -8,26 +8,54 @@ void arch_static_call_transform(void *site, void *tramp, void *func, bool tail)
{
int err;
bool is_ret0 = (func == __static_call_return0);
unsigned long target = (unsigned long)(is_ret0 ? tramp + PPC_SCT_RET0 : func);
bool is_short = is_offset_in_branch_range((long)target - (long)tramp);
if (!tramp)
return;
unsigned long _tramp = (unsigned long)tramp;
unsigned long _func = (unsigned long)func;
unsigned long _ret0 = _tramp + PPC_SCT_RET0;
bool is_short = is_offset_in_branch_range((long)func - (long)(site ? : tramp));
mutex_lock(&text_mutex);
if (func && !is_short) {
err = patch_ulong(tramp + PPC_SCT_DATA, target);
if (err)
goto out;
if (site && tail) {
if (!func)
err = patch_instruction(site, ppc_inst(PPC_RAW_BLR()));
else if (is_ret0)
err = patch_branch(site, _ret0, 0);
else if (is_short)
err = patch_branch(site, _func, 0);
else if (tramp)
err = patch_branch(site, _tramp, 0);
else
err = 0;
} else if (site) {
if (!func)
err = patch_instruction(site, ppc_inst(PPC_RAW_NOP()));
else if (is_ret0)
err = patch_instruction(site, ppc_inst(PPC_RAW_LI(_R3, 0)));
else if (is_short)
err = patch_branch(site, _func, BRANCH_SET_LINK);
else if (tramp)
err = patch_branch(site, _tramp, BRANCH_SET_LINK);
else
err = 0;
} else if (tramp) {
if (func && !is_short) {
err = patch_ulong(tramp + PPC_SCT_DATA, _func);
if (err)
goto out;
}
if (!func)
err = patch_instruction(tramp, ppc_inst(PPC_RAW_BLR()));
else if (is_ret0)
err = patch_branch(tramp, _ret0, 0);
else if (is_short)
err = patch_branch(tramp, _func, 0);
else
err = patch_instruction(tramp, ppc_inst(PPC_RAW_NOP()));
} else {
err = 0;
}
if (!func)
err = patch_instruction(tramp, ppc_inst(PPC_RAW_BLR()));
else if (is_short)
err = patch_branch(tramp, target, 0);
else
err = patch_instruction(tramp, ppc_inst(PPC_RAW_NOP()));
out:
mutex_unlock(&text_mutex);
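For readers who haven't used the API this transform backs, a minimal sketch (the hook names are hypothetical; see <linux/static_call.h> for the declarations):

	#include <linux/static_call.h>

	static int default_handler(int x) { return x; }
	static int fast_handler(int x) { return x + 1; }

	DEFINE_STATIC_CALL(my_hook, default_handler);

	int use_hook(void)
	{
		/* Retargeting lands in arch_static_call_transform() above,
		 * which patches the call site and/or the trampoline. */
		static_call_update(my_hook, &fast_handler);

		/* With inline static calls the "bl" here is patched in place. */
		return static_call(my_hook)(42);
	}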

@@ -39,7 +39,6 @@ flush_branch_caches:
// Flush the link stack
.rept 64
ANNOTATE_INTRA_FUNCTION_CALL
bl .+4
.endr
b 1f

@@ -901,6 +901,38 @@ void secondary_cpu_time_init(void)
register_decrementer_clockevent(smp_processor_id());
}
/*
* Divide a 128-bit dividend by a 32-bit divisor, leaving a 128-bit
* result.
*/
static __init void div128_by_32(u64 dividend_high, u64 dividend_low,
unsigned int divisor, struct div_result *dr)
{
unsigned long a, b, c, d;
unsigned long w, x, y, z;
u64 ra, rb, rc;
a = dividend_high >> 32;
b = dividend_high & 0xffffffff;
c = dividend_low >> 32;
d = dividend_low & 0xffffffff;
w = a / divisor;
ra = ((u64)(a - (w * divisor)) << 32) + b;
rb = ((u64)do_div(ra, divisor) << 32) + c;
x = ra;
rc = ((u64)do_div(rb, divisor) << 32) + d;
y = rb;
do_div(rc, divisor);
z = rc;
dr->result_high = ((u64)w << 32) + x;
dr->result_low = ((u64)y << 32) + z;
}
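A quick sanity check of the schoolbook base-2^32 long division above (illustrative only; the function is __init and local to this file):

	/* Hypothetical check: 2^96 / 3, i.e. dividend_high = 2^32, low = 0. */
	static void __init check_div128(void)
	{
		struct div_result dr;

		div128_by_32(0x100000000ULL, 0, 3, &dr);
		/* 2^96 / 3 truncates to (2^96 - 1) / 3 = 0x5555...55: */
		WARN_ON(dr.result_high != 0x0000000055555555ULL);
		WARN_ON(dr.result_low  != 0x5555555555555555ULL);
	}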
/* This function is only called on the boot processor */
void __init time_init(void)
{
@@ -974,39 +1006,6 @@ void __init time_init(void)
enable_sched_clock_irqtime();
}
/*
* Divide a 128-bit dividend by a 32-bit divisor, leaving a 128 bit
* result.
*/
void div128_by_32(u64 dividend_high, u64 dividend_low,
unsigned divisor, struct div_result *dr)
{
unsigned long a, b, c, d;
unsigned long w, x, y, z;
u64 ra, rb, rc;
a = dividend_high >> 32;
b = dividend_high & 0xffffffff;
c = dividend_low >> 32;
d = dividend_low & 0xffffffff;
w = a / divisor;
ra = ((u64)(a - (w * divisor)) << 32) + b;
rb = ((u64) do_div(ra, divisor) << 32) + c;
x = ra;
rc = ((u64) do_div(rb, divisor) << 32) + d;
y = rb;
do_div(rc, divisor);
z = rc;
dr->result_high = ((u64)w << 32) + x;
dr->result_low = ((u64)y << 32) + z;
}
/* We don't need to calibrate delay, we use the CPU timebase for that */
void calibrate_delay(void)
{

@@ -36,9 +36,6 @@ void __init udbg_early_init(void)
#elif defined(CONFIG_PPC_EARLY_DEBUG_RTAS_PANEL)
/* RTAS panel debug */
udbg_init_rtas_panel();
#elif defined(CONFIG_PPC_EARLY_DEBUG_RTAS_CONSOLE)
/* RTAS console debug */
udbg_init_rtas_console();
#elif defined(CONFIG_PPC_EARLY_DEBUG_PAS_REALMODE)
udbg_init_pas_realmode();
#elif defined(CONFIG_PPC_EARLY_DEBUG_BOOTX)

@@ -1,10 +1,4 @@
/* SPDX-License-Identifier: GPL-2.0 */
#ifdef CONFIG_PPC64
#define PROVIDE32(x) PROVIDE(__unused__##x)
#else
#define PROVIDE32(x) PROVIDE(x)
#endif
#define BSS_FIRST_SECTIONS *(.bss.prominit)
#define EMITS_PT_NOTE
#define RO_EXCEPTION_TABLE_ALIGN 0
@@ -127,7 +121,6 @@ SECTIONS
. = ALIGN(PAGE_SIZE);
_etext = .;
PROVIDE32 (etext = .);
/* Read-only data */
RO_DATA(PAGE_SIZE)
@@ -394,7 +387,6 @@ SECTIONS
. = ALIGN(PAGE_SIZE);
_edata = .;
PROVIDE32 (edata = .);
/*
* And finally the bss
@@ -404,7 +396,6 @@ SECTIONS
. = ALIGN(PAGE_SIZE);
_end = . ;
PROVIDE32 (end = .);
DWARF_DEBUG
ELF_DETAILS

@@ -348,16 +348,13 @@ write_utlb:
rlwinm r10, r24, 0, 22, 27
cmpwi r10, PPC47x_TLB0_4K
bne 0f
li r10, 0x1000 /* r10 = 4k */
ANNOTATE_INTRA_FUNCTION_CALL
bl 1f
beq 0f
0:
/* Defaults to 256M */
lis r10, 0x1000
bcl 20,31,$+4
0: bcl 20,31,$+4
1: mflr r4
addi r4, r4, (2f-1b) /* virtual address of 2f */
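For context: "bcl 20,31,$+4" is the canonical PC-discovery idiom, an always-taken branch-and-link to the very next instruction. It loads LR without unbalancing the CPU's link stack the way a plain "bl" would, which is also why the ANNOTATE_INTRA_FUNCTION_CALL annotation goes away:

	bcl	20,31,$+4	/* LR := address of the next instruction */
	mflr	r4		/* r4 now holds the runtime PC */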

@@ -125,8 +125,6 @@ static u32 *kvmppc_mmu_get_pteg(struct kvm_vcpu *vcpu, u32 vsid, u32 eaddr,
return (u32*)pteg;
}
extern char etext[];
int kvmppc_mmu_map_page(struct kvm_vcpu *vcpu, struct kvmppc_pte *orig_pte,
bool iswrite)
{

@@ -1524,14 +1524,12 @@ kvm_flush_link_stack:
/* Flush the link stack. On Power8 it's up to 32 entries in size. */
.rept 32
ANNOTATE_INTRA_FUNCTION_CALL
bl .+4
.endr
/* And on Power9 it's up to 64. */
BEGIN_FTR_SECTION
.rept 32
ANNOTATE_INTRA_FUNCTION_CALL
bl .+4
.endr
END_FTR_SECTION_IFSET(CPU_FTR_ARCH_300)

@@ -550,12 +550,9 @@ int kvm_vm_ioctl_check_extension(struct kvm *kvm, long ext)
#ifdef CONFIG_PPC_BOOK3S_64
case KVM_CAP_SPAPR_TCE:
fallthrough;
case KVM_CAP_SPAPR_TCE_64:
r = 1;
break;
case KVM_CAP_SPAPR_TCE_VFIO:
r = !!cpu_has_feature(CPU_FTR_HVMODE);
break;
case KVM_CAP_PPC_RTAS:
case KVM_CAP_PPC_FIXUP_HCALL:
case KVM_CAP_PPC_ENABLE_HCALL:

@@ -1358,18 +1358,6 @@ static void __init htab_initialize(void)
} else {
unsigned long limit = MEMBLOCK_ALLOC_ANYWHERE;
#ifdef CONFIG_PPC_CELL
/*
* Cell may require the hash table down low when using the
* Axon IOMMU in order to fit the dynamic region over it, see
* comments in cell/iommu.c
*/
if (fdt_subnode_offset(initial_boot_params, 0, "axon") > 0) {
limit = 0x80000000;
pr_info("Hash table forced below 2G for Axon IOMMU\n");
}
#endif /* CONFIG_PPC_CELL */
table = memblock_phys_alloc_range(htab_size_bytes,
htab_size_bytes,
0, limit);

@@ -587,7 +587,7 @@ int pmd_move_must_withdraw(struct spinlock *new_pmd_ptl,
/*
* Does the CPU support tlbie?
*/
bool tlbie_capable __read_mostly = true;
bool tlbie_capable __read_mostly = IS_ENABLED(CONFIG_PPC_RADIX_BROADCAST_TLBIE);
EXPORT_SYMBOL(tlbie_capable);
/*
@@ -595,7 +595,7 @@ EXPORT_SYMBOL(tlbie_capable);
* address spaces? tlbie may still be used for nMMU accelerators, and for KVM
* guest address spaces.
*/
bool tlbie_enabled __read_mostly = true;
bool tlbie_enabled __read_mostly = IS_ENABLED(CONFIG_PPC_RADIX_BROADCAST_TLBIE);
static int __init setup_disable_tlbie(char *str)
{

@@ -4,7 +4,6 @@
#include <linux/slab.h>
#include <linux/mmzone.h>
#include <linux/vmalloc.h>
#include <asm/io-workarounds.h>
unsigned long ioremap_bot;
EXPORT_SYMBOL(ioremap_bot);
@@ -14,8 +13,6 @@ void __iomem *ioremap(phys_addr_t addr, unsigned long size)
pgprot_t prot = pgprot_noncached(PAGE_KERNEL);
void *caller = __builtin_return_address(0);
if (iowa_is_active())
return iowa_ioremap(addr, size, prot, caller);
return __ioremap_caller(addr, size, prot, caller);
}
EXPORT_SYMBOL(ioremap);
@@ -25,8 +22,6 @@ void __iomem *ioremap_wc(phys_addr_t addr, unsigned long size)
pgprot_t prot = pgprot_noncached_wc(PAGE_KERNEL);
void *caller = __builtin_return_address(0);
if (iowa_is_active())
return iowa_ioremap(addr, size, prot, caller);
return __ioremap_caller(addr, size, prot, caller);
}
EXPORT_SYMBOL(ioremap_wc);
@@ -36,8 +31,6 @@ void __iomem *ioremap_coherent(phys_addr_t addr, unsigned long size)
pgprot_t prot = pgprot_cached(PAGE_KERNEL);
void *caller = __builtin_return_address(0);
if (iowa_is_active())
return iowa_ioremap(addr, size, prot, caller);
return __ioremap_caller(addr, size, prot, caller);
}
@@ -50,8 +43,6 @@ void __iomem *ioremap_prot(phys_addr_t addr, size_t size, unsigned long flags)
if (pte_write(pte))
pte = pte_mkdirty(pte);
if (iowa_is_active())
return iowa_ioremap(addr, size, pte_pgprot(pte), caller);
return __ioremap_caller(addr, size, pte_pgprot(pte), caller);
}
EXPORT_SYMBOL(ioremap_prot);

@@ -52,6 +52,6 @@ void iounmap(volatile void __iomem *token)
if (!slab_is_available())
return;
generic_iounmap(PCI_FIX_ADDR(token));
generic_iounmap(token);
}
EXPORT_SYMBOL(iounmap);

@@ -319,28 +319,6 @@ void __init mem_init(void)
per_cpu(next_tlbcam_idx, smp_processor_id()) =
(mfspr(SPRN_TLB1CFG) & TLBnCFG_N_ENTRY) - 1;
#endif
#ifdef CONFIG_PPC32
pr_info("Kernel virtual memory layout:\n");
#ifdef CONFIG_KASAN
pr_info(" * 0x%08lx..0x%08lx : kasan shadow mem\n",
KASAN_SHADOW_START, KASAN_SHADOW_END);
#endif
pr_info(" * 0x%08lx..0x%08lx : fixmap\n", FIXADDR_START, FIXADDR_TOP);
#ifdef CONFIG_HIGHMEM
pr_info(" * 0x%08lx..0x%08lx : highmem PTEs\n",
PKMAP_BASE, PKMAP_ADDR(LAST_PKMAP));
#endif /* CONFIG_HIGHMEM */
if (ioremap_bot != IOREMAP_TOP)
pr_info(" * 0x%08lx..0x%08lx : early ioremap\n",
ioremap_bot, IOREMAP_TOP);
pr_info(" * 0x%08lx..0x%08lx : vmalloc & ioremap\n",
VMALLOC_START, VMALLOC_END);
#ifdef MODULES_VADDR
pr_info(" * 0x%08lx..0x%08lx : modules\n",
MODULES_VADDR, MODULES_END);
#endif
#endif /* CONFIG_PPC32 */
}
void free_initmem(void)

@@ -1336,7 +1336,7 @@ int hot_add_scn_to_nid(unsigned long scn_addr)
return nid;
}
static u64 hot_add_drconf_memory_max(void)
u64 hot_add_drconf_memory_max(void)
{
struct device_node *memory = NULL;
struct device_node *dn = NULL;

@@ -2226,6 +2226,10 @@ static struct pmu power_pmu = {
#define PERF_SAMPLE_ADDR_TYPE (PERF_SAMPLE_ADDR | \
PERF_SAMPLE_PHYS_ADDR | \
PERF_SAMPLE_DATA_PAGE_SIZE)
#define SIER_TYPE_SHIFT 15
#define SIER_TYPE_MASK (0x7ull << SIER_TYPE_SHIFT)
/*
* A counter has overflowed; update its count and record
* things if requested. Note that interrupts are hard-disabled
@@ -2294,6 +2298,22 @@ static void record_and_restart(struct perf_event *event, unsigned long val,
is_kernel_addr(mfspr(SPRN_SIAR)))
record = 0;
/*
* SIER[46-48] presents the instruction type of the sampled
* instruction (IBM bits 46-48 map to a little-endian shift of 15,
* hence SIER_TYPE_SHIFT above). In ISA v3.0 and earlier, values
* "0" and "7" are considered reserved. In ISA v3.1, value "7" has
* been used to indicate "larx/stcx". Drop the sample, with an ISA
* version check, if "type" holds a reserved value for this field.
*/
if (event->attr.sample_type & PERF_SAMPLE_DATA_SRC &&
ppmu->get_mem_data_src) {
val = (regs->dar & SIER_TYPE_MASK) >> SIER_TYPE_SHIFT;
if (val == 0 || (val == 7 && !cpu_has_feature(CPU_FTR_ARCH_31))) {
record = 0;
atomic64_inc(&event->lost_samples);
}
}
/*
* Finally record data if requested.
*/

@@ -319,10 +319,18 @@ void isa207_get_mem_data_src(union perf_mem_data_src *dsrc, u32 flags,
return;
}
sier = mfspr(SPRN_SIER);
/*
* Use regs->dar for SPRN_SIER, which was saved by
* perf_read_regs() at the beginning of the PMU
* interrupt handler, to avoid multiple reads of
* SPRN_SIER.
*/
sier = regs->dar;
val = (sier & ISA207_SIER_TYPE_MASK) >> ISA207_SIER_TYPE_SHIFT;
if (val != 1 && val != 2 && !(val == 7 && cpu_has_feature(CPU_FTR_ARCH_31)))
if (val != 1 && val != 2 && !(val == 7 && cpu_has_feature(CPU_FTR_ARCH_31))) {
dsrc->val = 0;
return;
}
idx = (sier & ISA207_SIER_LDST_MASK) >> ISA207_SIER_LDST_SHIFT;
sub_idx = (sier & ISA207_SIER_DATA_SRC_MASK) >> ISA207_SIER_DATA_SRC_SHIFT;
@@ -338,8 +346,12 @@
* to determine the exact instruction type. If the sampling
* criterion is neither a load nor a store, default the type
* to NA.
*
* Use regs->dsisr for MMCRA, which was saved by perf_read_regs()
* at the beginning of the PMU interrupt handler, to avoid
* multiple reads of SPRN_MMCRA.
*/
mmcra = mfspr(SPRN_MMCRA);
mmcra = regs->dsisr;
op_type = (mmcra >> MMCRA_SAMP_ELIG_SHIFT) & MMCRA_SAMP_ELIG_MASK;
switch (op_type) {

@@ -156,6 +156,7 @@ static void vpa_pmu_del(struct perf_event *event, int flags)
}
static struct pmu vpa_pmu = {
.module = THIS_MODULE,
.task_ctx_nr = perf_sw_context,
.name = "vpa_pmu",
.event_init = vpa_pmu_event_init,

@@ -37,7 +37,7 @@
#define UIC_VR 0x7
#define UIC_VCR 0x8
struct uic *primary_uic;
static struct uic *primary_uic;
struct uic {
int index;

@@ -70,10 +70,6 @@ config PPC_DT_CPU_FTRS
firmware provides this binding.
If you're not sure say Y.
config UDBG_RTAS_CONSOLE
bool "RTAS based debug console"
depends on PPC_RTAS
config PPC_SMP_MUXED_IPI
bool
help
@@ -186,12 +182,6 @@ config PPC_INDIRECT_PIO
bool
select GENERIC_IOMAP
config PPC_INDIRECT_MMIO
bool
config PPC_IO_WORKAROUNDS
bool
source "drivers/cpufreq/Kconfig"
menu "CPUIdle driver"

@@ -449,6 +449,19 @@ config PPC_RADIX_MMU_DEFAULT
If you're unsure, say Y.
config PPC_RADIX_BROADCAST_TLBIE
bool
depends on PPC_RADIX_MMU
help
Power ISA v3.0 and later implementations in the Linux Compliancy Subset
and lower are not required to implement broadcast TLBIE instructions.
Platforms with CPUs that do implement TLBIE broadcast, that is, where
a TLB invalidation instruction performed on one CPU operates on the
TLBs of all CPUs in the system, should select this option. If this
option is selected, the disable_tlbie kernel command line option can
be used to cause global TLB invalidations to be done via IPIs; without
it, IPIs will be used unconditionally.
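A platform whose CPUs broadcast TLBIE would then select this symbol from its own Kconfig entry, along the lines of the illustrative fragment below (the actual selects land in the platform patches of this series):

	config PPC_PSERIES
		...
		select PPC_RADIX_BROADCAST_TLBIE if PPC_RADIX_MMU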
config PPC_KERNEL_PREFIXED
depends on PPC_HAVE_PREFIXED_SUPPORT
depends on CC_HAS_PREFIXED

@@ -3,42 +3,6 @@ config PPC_CELL
select PPC_64S_HASH_MMU if PPC64
bool
config PPC_CELL_COMMON
bool
select PPC_CELL
select PPC_DCR_MMIO
select PPC_INDIRECT_PIO
select PPC_INDIRECT_MMIO
select PPC_HASH_MMU_NATIVE
select PPC_RTAS
select IRQ_EDGE_EOI_HANDLER
config PPC_CELL_NATIVE
bool
select PPC_CELL_COMMON
select MPIC
select PPC_IO_WORKAROUNDS
select IBM_EMAC_EMAC4 if IBM_EMAC
select IBM_EMAC_RGMII if IBM_EMAC
select IBM_EMAC_ZMII if IBM_EMAC #test only
select IBM_EMAC_TAH if IBM_EMAC #test only
config PPC_IBM_CELL_BLADE
bool "IBM Cell Blade"
depends on PPC64 && PPC_BOOK3S && CPU_BIG_ENDIAN
select PPC_CELL_NATIVE
select PPC_OF_PLATFORM_PCI
select FORCE_PCI
select MMIO_NVRAM
select PPC_UDBG_16550
select UDBG_RTAS_CONSOLE
config AXON_MSI
bool
depends on PPC_IBM_CELL_BLADE && PCI_MSI
select IRQ_DOMAIN_NOMAP
default y
menu "Cell Broadband Engine options"
depends on PPC_CELL
@@ -57,48 +21,4 @@ config SPU_BASE
bool
select PPC_COPRO_BASE
config CBE_RAS
bool "RAS features for bare metal Cell BE"
depends on PPC_CELL_NATIVE
default y
config PPC_IBM_CELL_RESETBUTTON
bool "IBM Cell Blade Pinhole reset button"
depends on CBE_RAS && PPC_IBM_CELL_BLADE
default y
help
Support Pinhole Resetbutton on IBM Cell blades.
This adds a method to trigger system reset via front panel pinhole button.
config PPC_IBM_CELL_POWERBUTTON
tristate "IBM Cell Blade power button"
depends on PPC_IBM_CELL_BLADE && INPUT_EVDEV
default y
help
Support Powerbutton on IBM Cell blades.
This will enable the powerbutton as an input device.
config CBE_THERM
tristate "CBE thermal support"
default m
depends on CBE_RAS && SPU_BASE
config PPC_PMI
tristate
default y
depends on CPU_FREQ_CBE_PMI || PPC_IBM_CELL_POWERBUTTON
help
PMI (Platform Management Interrupt) is a way to
communicate with the BMC (Baseboard Management Controller).
It is used in some IBM Cell blades.
config CBE_CPUFREQ_SPU_GOVERNOR
tristate "CBE frequency scaling based on SPU usage"
depends on SPU_FS && CPU_FREQ
default m
help
This governor checks for spu usage to adjust the cpu frequency.
If no spu is running on a given cpu, that cpu will be throttled to
the minimal possible frequency.
endmenu

@@ -1,27 +1,4 @@
# SPDX-License-Identifier: GPL-2.0
obj-$(CONFIG_PPC_CELL_COMMON) += cbe_regs.o interrupt.o pervasive.o
obj-$(CONFIG_PPC_CELL_NATIVE) += iommu.o setup.o spider-pic.o \
pmu.o spider-pci.o
obj-$(CONFIG_CBE_RAS) += ras.o
obj-$(CONFIG_CBE_THERM) += cbe_thermal.o
obj-$(CONFIG_CBE_CPUFREQ_SPU_GOVERNOR) += cpufreq_spudemand.o
obj-$(CONFIG_PPC_IBM_CELL_POWERBUTTON) += cbe_powerbutton.o
ifdef CONFIG_SMP
obj-$(CONFIG_PPC_CELL_NATIVE) += smp.o
endif
# needed only when building loadable spufs.ko
spu-priv1-$(CONFIG_PPC_CELL_COMMON) += spu_priv1_mmio.o
spu-manage-$(CONFIG_PPC_CELL_COMMON) += spu_manage.o
obj-$(CONFIG_SPU_BASE) += spu_callbacks.o spu_base.o \
spu_syscalls.o \
$(spu-priv1-y) \
$(spu-manage-y) \
spufs/
obj-$(CONFIG_AXON_MSI) += axon_msi.o

@@ -1,481 +0,0 @@
// SPDX-License-Identifier: GPL-2.0-or-later
/*
* Copyright 2007, Michael Ellerman, IBM Corporation.
*/
#include <linux/interrupt.h>
#include <linux/irq.h>
#include <linux/kernel.h>
#include <linux/pci.h>
#include <linux/msi.h>
#include <linux/export.h>
#include <linux/slab.h>
#include <linux/debugfs.h>
#include <linux/of.h>
#include <linux/of_irq.h>
#include <linux/platform_device.h>
#include <asm/dcr.h>
#include <asm/machdep.h>
#include "cell.h"
/*
* MSIC registers, specified as offsets from dcr_base
*/
#define MSIC_CTRL_REG 0x0
/* Base Address registers specify FIFO location in BE memory */
#define MSIC_BASE_ADDR_HI_REG 0x3
#define MSIC_BASE_ADDR_LO_REG 0x4
/* Hold the read/write offsets into the FIFO */
#define MSIC_READ_OFFSET_REG 0x5
#define MSIC_WRITE_OFFSET_REG 0x6
/* MSIC control register flags */
#define MSIC_CTRL_ENABLE 0x0001
#define MSIC_CTRL_FIFO_FULL_ENABLE 0x0002
#define MSIC_CTRL_IRQ_ENABLE 0x0008
#define MSIC_CTRL_FULL_STOP_ENABLE 0x0010
/*
* The MSIC can be configured to use a FIFO of 32KB, 64KB, 128KB or 256KB.
* Currently we're using a 64KB FIFO size.
*/
#define MSIC_FIFO_SIZE_SHIFT 16
#define MSIC_FIFO_SIZE_BYTES (1 << MSIC_FIFO_SIZE_SHIFT)
/*
* To configure the FIFO size as (1 << n) bytes, we write (n - 15) into bits
* 8-9 of the MSIC control reg.
*/
#define MSIC_CTRL_FIFO_SIZE (((MSIC_FIFO_SIZE_SHIFT - 15) << 8) & 0x300)
/*
* We need to mask the read/write offsets to make sure they stay within
* the bounds of the FIFO. Also they should always be 16-byte aligned.
*/
#define MSIC_FIFO_SIZE_MASK ((MSIC_FIFO_SIZE_BYTES - 1) & ~0xFu)
/* Each entry in the FIFO is 16 bytes, the first 4 bytes hold the irq # */
#define MSIC_FIFO_ENTRY_SIZE 0x10
struct axon_msic {
struct irq_domain *irq_domain;
__le32 *fifo_virt;
dma_addr_t fifo_phys;
dcr_host_t dcr_host;
u32 read_offset;
#ifdef DEBUG
u32 __iomem *trigger;
#endif
};
#ifdef DEBUG
void axon_msi_debug_setup(struct device_node *dn, struct axon_msic *msic);
#else
static inline void axon_msi_debug_setup(struct device_node *dn,
struct axon_msic *msic) { }
#endif
static void msic_dcr_write(struct axon_msic *msic, unsigned int dcr_n, u32 val)
{
pr_devel("axon_msi: dcr_write(0x%x, 0x%x)\n", val, dcr_n);
dcr_write(msic->dcr_host, dcr_n, val);
}
static void axon_msi_cascade(struct irq_desc *desc)
{
struct irq_chip *chip = irq_desc_get_chip(desc);
struct axon_msic *msic = irq_desc_get_handler_data(desc);
u32 write_offset, msi;
int idx;
int retry = 0;
write_offset = dcr_read(msic->dcr_host, MSIC_WRITE_OFFSET_REG);
pr_devel("axon_msi: original write_offset 0x%x\n", write_offset);
/* write_offset doesn't wrap properly, so we have to mask it */
write_offset &= MSIC_FIFO_SIZE_MASK;
while (msic->read_offset != write_offset && retry < 100) {
idx = msic->read_offset / sizeof(__le32);
msi = le32_to_cpu(msic->fifo_virt[idx]);
msi &= 0xFFFF;
pr_devel("axon_msi: woff %x roff %x msi %x\n",
write_offset, msic->read_offset, msi);
if (msi < irq_get_nr_irqs() && irq_get_chip_data(msi) == msic) {
generic_handle_irq(msi);
msic->fifo_virt[idx] = cpu_to_le32(0xffffffff);
} else {
/*
* Reading the MSIC_WRITE_OFFSET_REG does not
* reliably flush the outstanding DMA to the
* FIFO buffer. Here we were reading stale
* data, so we need to retry.
*/
udelay(1);
retry++;
pr_devel("axon_msi: invalid irq 0x%x!\n", msi);
continue;
}
if (retry) {
pr_devel("axon_msi: late irq 0x%x, retry %d\n",
msi, retry);
retry = 0;
}
msic->read_offset += MSIC_FIFO_ENTRY_SIZE;
msic->read_offset &= MSIC_FIFO_SIZE_MASK;
}
if (retry) {
printk(KERN_WARNING "axon_msi: irq timed out\n");
msic->read_offset += MSIC_FIFO_ENTRY_SIZE;
msic->read_offset &= MSIC_FIFO_SIZE_MASK;
}
chip->irq_eoi(&desc->irq_data);
}
static struct axon_msic *find_msi_translator(struct pci_dev *dev)
{
struct irq_domain *irq_domain;
struct device_node *dn, *tmp;
const phandle *ph;
struct axon_msic *msic = NULL;
dn = of_node_get(pci_device_to_OF_node(dev));
if (!dn) {
dev_dbg(&dev->dev, "axon_msi: no pci_dn found\n");
return NULL;
}
for (; dn; dn = of_get_next_parent(dn)) {
ph = of_get_property(dn, "msi-translator", NULL);
if (ph)
break;
}
if (!ph) {
dev_dbg(&dev->dev,
"axon_msi: no msi-translator property found\n");
goto out_error;
}
tmp = dn;
dn = of_find_node_by_phandle(*ph);
of_node_put(tmp);
if (!dn) {
dev_dbg(&dev->dev,
"axon_msi: msi-translator doesn't point to a node\n");
goto out_error;
}
irq_domain = irq_find_host(dn);
if (!irq_domain) {
dev_dbg(&dev->dev, "axon_msi: no irq_domain found for node %pOF\n",
dn);
goto out_error;
}
msic = irq_domain->host_data;
out_error:
of_node_put(dn);
return msic;
}
static int setup_msi_msg_address(struct pci_dev *dev, struct msi_msg *msg)
{
struct device_node *dn;
int len;
const u32 *prop;
dn = of_node_get(pci_device_to_OF_node(dev));
if (!dn) {
dev_dbg(&dev->dev, "axon_msi: no pci_dn found\n");
return -ENODEV;
}
for (; dn; dn = of_get_next_parent(dn)) {
if (!dev->no_64bit_msi) {
prop = of_get_property(dn, "msi-address-64", &len);
if (prop)
break;
}
prop = of_get_property(dn, "msi-address-32", &len);
if (prop)
break;
}
if (!prop) {
dev_dbg(&dev->dev,
"axon_msi: no msi-address-(32|64) properties found\n");
of_node_put(dn);
return -ENOENT;
}
switch (len) {
case 8:
msg->address_hi = prop[0];
msg->address_lo = prop[1];
break;
case 4:
msg->address_hi = 0;
msg->address_lo = prop[0];
break;
default:
dev_dbg(&dev->dev,
"axon_msi: malformed msi-address-(32|64) property\n");
of_node_put(dn);
return -EINVAL;
}
of_node_put(dn);
return 0;
}
static int axon_msi_setup_msi_irqs(struct pci_dev *dev, int nvec, int type)
{
unsigned int virq, rc;
struct msi_desc *entry;
struct msi_msg msg;
struct axon_msic *msic;
msic = find_msi_translator(dev);
if (!msic)
return -ENODEV;
rc = setup_msi_msg_address(dev, &msg);
if (rc)
return rc;
msi_for_each_desc(entry, &dev->dev, MSI_DESC_NOTASSOCIATED) {
virq = irq_create_direct_mapping(msic->irq_domain);
if (!virq) {
dev_warn(&dev->dev,
"axon_msi: virq allocation failed!\n");
return -1;
}
dev_dbg(&dev->dev, "axon_msi: allocated virq 0x%x\n", virq);
irq_set_msi_desc(virq, entry);
msg.data = virq;
pci_write_msi_msg(virq, &msg);
}
return 0;
}
static void axon_msi_teardown_msi_irqs(struct pci_dev *dev)
{
struct msi_desc *entry;
dev_dbg(&dev->dev, "axon_msi: tearing down msi irqs\n");
msi_for_each_desc(entry, &dev->dev, MSI_DESC_ASSOCIATED) {
irq_set_msi_desc(entry->irq, NULL);
irq_dispose_mapping(entry->irq);
entry->irq = 0;
}
}
static struct irq_chip msic_irq_chip = {
.irq_mask = pci_msi_mask_irq,
.irq_unmask = pci_msi_unmask_irq,
.irq_shutdown = pci_msi_mask_irq,
.name = "AXON-MSI",
};
static int msic_host_map(struct irq_domain *h, unsigned int virq,
irq_hw_number_t hw)
{
irq_set_chip_data(virq, h->host_data);
irq_set_chip_and_handler(virq, &msic_irq_chip, handle_simple_irq);
return 0;
}
static const struct irq_domain_ops msic_host_ops = {
.map = msic_host_map,
};
static void axon_msi_shutdown(struct platform_device *device)
{
struct axon_msic *msic = dev_get_drvdata(&device->dev);
u32 tmp;
pr_devel("axon_msi: disabling %pOF\n",
irq_domain_get_of_node(msic->irq_domain));
tmp = dcr_read(msic->dcr_host, MSIC_CTRL_REG);
tmp &= ~MSIC_CTRL_ENABLE & ~MSIC_CTRL_IRQ_ENABLE;
msic_dcr_write(msic, MSIC_CTRL_REG, tmp);
}
static int axon_msi_probe(struct platform_device *device)
{
struct device_node *dn = device->dev.of_node;
struct axon_msic *msic;
unsigned int virq;
int dcr_base, dcr_len;
pr_devel("axon_msi: setting up dn %pOF\n", dn);
msic = kzalloc(sizeof(*msic), GFP_KERNEL);
if (!msic) {
printk(KERN_ERR "axon_msi: couldn't allocate msic for %pOF\n",
dn);
goto out;
}
dcr_base = dcr_resource_start(dn, 0);
dcr_len = dcr_resource_len(dn, 0);
if (dcr_base == 0 || dcr_len == 0) {
printk(KERN_ERR
"axon_msi: couldn't parse dcr properties on %pOF\n",
dn);
goto out_free_msic;
}
msic->dcr_host = dcr_map(dn, dcr_base, dcr_len);
if (!DCR_MAP_OK(msic->dcr_host)) {
printk(KERN_ERR "axon_msi: dcr_map failed for %pOF\n",
dn);
goto out_free_msic;
}
msic->fifo_virt = dma_alloc_coherent(&device->dev, MSIC_FIFO_SIZE_BYTES,
&msic->fifo_phys, GFP_KERNEL);
if (!msic->fifo_virt) {
printk(KERN_ERR "axon_msi: couldn't allocate fifo for %pOF\n",
dn);
goto out_free_msic;
}
virq = irq_of_parse_and_map(dn, 0);
if (!virq) {
printk(KERN_ERR "axon_msi: irq parse and map failed for %pOF\n",
dn);
goto out_free_fifo;
}
memset(msic->fifo_virt, 0xff, MSIC_FIFO_SIZE_BYTES);
/* We rely on being able to stash a virq in a u16, so limit irqs to < 65536 */
msic->irq_domain = irq_domain_add_nomap(dn, 65536, &msic_host_ops, msic);
if (!msic->irq_domain) {
printk(KERN_ERR "axon_msi: couldn't allocate irq_domain for %pOF\n",
dn);
goto out_free_fifo;
}
irq_set_handler_data(virq, msic);
irq_set_chained_handler(virq, axon_msi_cascade);
pr_devel("axon_msi: irq 0x%x setup for axon_msi\n", virq);
/* Enable the MSIC hardware */
msic_dcr_write(msic, MSIC_BASE_ADDR_HI_REG, msic->fifo_phys >> 32);
msic_dcr_write(msic, MSIC_BASE_ADDR_LO_REG,
msic->fifo_phys & 0xFFFFFFFF);
msic_dcr_write(msic, MSIC_CTRL_REG,
MSIC_CTRL_IRQ_ENABLE | MSIC_CTRL_ENABLE |
MSIC_CTRL_FIFO_SIZE);
msic->read_offset = dcr_read(msic->dcr_host, MSIC_WRITE_OFFSET_REG)
& MSIC_FIFO_SIZE_MASK;
dev_set_drvdata(&device->dev, msic);
cell_pci_controller_ops.setup_msi_irqs = axon_msi_setup_msi_irqs;
cell_pci_controller_ops.teardown_msi_irqs = axon_msi_teardown_msi_irqs;
axon_msi_debug_setup(dn, msic);
printk(KERN_DEBUG "axon_msi: setup MSIC on %pOF\n", dn);
return 0;
out_free_fifo:
dma_free_coherent(&device->dev, MSIC_FIFO_SIZE_BYTES, msic->fifo_virt,
msic->fifo_phys);
out_free_msic:
kfree(msic);
out:
return -1;
}
static const struct of_device_id axon_msi_device_id[] = {
{
.compatible = "ibm,axon-msic"
},
{}
};
static struct platform_driver axon_msi_driver = {
.probe = axon_msi_probe,
.shutdown = axon_msi_shutdown,
.driver = {
.name = "axon-msi",
.of_match_table = axon_msi_device_id,
},
};
static int __init axon_msi_init(void)
{
return platform_driver_register(&axon_msi_driver);
}
subsys_initcall(axon_msi_init);
#ifdef DEBUG
static int msic_set(void *data, u64 val)
{
struct axon_msic *msic = data;
out_le32(msic->trigger, val);
return 0;
}
static int msic_get(void *data, u64 *val)
{
*val = 0;
return 0;
}
DEFINE_SIMPLE_ATTRIBUTE(fops_msic, msic_get, msic_set, "%llu\n");
void axon_msi_debug_setup(struct device_node *dn, struct axon_msic *msic)
{
char name[8];
struct resource res;
if (of_address_to_resource(dn, 0, &res)) {
pr_devel("axon_msi: couldn't get reg property\n");
return;
}
msic->trigger = ioremap(res.start, 0x4);
if (!msic->trigger) {
pr_devel("axon_msi: ioremap failed\n");
return;
}
snprintf(name, sizeof(name), "msic_%d", of_node_to_nid(dn));
debugfs_create_file(name, 0600, arch_debugfs_dir, msic, &fops_msic);
}
#endif /* DEBUG */

@@ -1,106 +0,0 @@
// SPDX-License-Identifier: GPL-2.0-or-later
/*
* driver for powerbutton on IBM cell blades
*
* (C) Copyright IBM Corp. 2005-2008
*
* Author: Christian Krafft <krafft@de.ibm.com>
*/
#include <linux/input.h>
#include <linux/module.h>
#include <linux/of.h>
#include <linux/platform_device.h>
#include <asm/pmi.h>
static struct input_dev *button_dev;
static struct platform_device *button_pdev;
static void cbe_powerbutton_handle_pmi(pmi_message_t pmi_msg)
{
BUG_ON(pmi_msg.type != PMI_TYPE_POWER_BUTTON);
input_report_key(button_dev, KEY_POWER, 1);
input_sync(button_dev);
input_report_key(button_dev, KEY_POWER, 0);
input_sync(button_dev);
}
static struct pmi_handler cbe_pmi_handler = {
.type = PMI_TYPE_POWER_BUTTON,
.handle_pmi_message = cbe_powerbutton_handle_pmi,
};
static int __init cbe_powerbutton_init(void)
{
int ret = 0;
struct input_dev *dev;
if (!of_machine_is_compatible("IBM,CBPLUS-1.0")) {
printk(KERN_ERR "%s: Not a cell blade.\n", __func__);
ret = -ENODEV;
goto out;
}
dev = input_allocate_device();
if (!dev) {
ret = -ENOMEM;
printk(KERN_ERR "%s: Not enough memory.\n", __func__);
goto out;
}
set_bit(EV_KEY, dev->evbit);
set_bit(KEY_POWER, dev->keybit);
dev->name = "Power Button";
dev->id.bustype = BUS_HOST;
/* this makes the button look like an acpi power button
* no clue whether anyone relies on that though */
dev->id.product = 0x02;
dev->phys = "LNXPWRBN/button/input0";
button_pdev = platform_device_register_simple("power_button", 0, NULL, 0);
if (IS_ERR(button_pdev)) {
ret = PTR_ERR(button_pdev);
goto out_free_input;
}
dev->dev.parent = &button_pdev->dev;
ret = input_register_device(dev);
if (ret) {
printk(KERN_ERR "%s: Failed to register device\n", __func__);
goto out_free_pdev;
}
button_dev = dev;
ret = pmi_register_handler(&cbe_pmi_handler);
if (ret) {
printk(KERN_ERR "%s: Failed to register with pmi.\n", __func__);
goto out_free_pdev;
}
goto out;
out_free_pdev:
platform_device_unregister(button_pdev);
out_free_input:
input_free_device(dev);
out:
return ret;
}
static void __exit cbe_powerbutton_exit(void)
{
pmi_unregister_handler(&cbe_pmi_handler);
platform_device_unregister(button_pdev);
input_free_device(button_dev);
}
module_init(cbe_powerbutton_init);
module_exit(cbe_powerbutton_exit);
MODULE_DESCRIPTION("Driver for powerbutton on IBM cell blades");
MODULE_LICENSE("GPL");
MODULE_AUTHOR("Christian Krafft <krafft@de.ibm.com>");

@@ -1,298 +0,0 @@
// SPDX-License-Identifier: GPL-2.0-only
/*
* cbe_regs.c
*
* Accessor routines for the various MMIO register blocks of the CBE
*
* (c) 2006 Benjamin Herrenschmidt <benh@kernel.crashing.org>, IBM Corp.
*/
#include <linux/percpu.h>
#include <linux/types.h>
#include <linux/export.h>
#include <linux/of.h>
#include <linux/of_address.h>
#include <linux/pgtable.h>
#include <asm/io.h>
#include <asm/ptrace.h>
#include <asm/cell-regs.h>
/*
* Current implementation uses "cpu" nodes. We build our own mapping
* array of cpu numbers to cpu nodes locally for now to allow interrupt
* time code to have a fast path rather than call of_get_cpu_node(). If
* we implement cpu hotplug, we'll have to install an appropriate notifier
* in order to release references to the cpu going away
*/
static struct cbe_regs_map
{
struct device_node *cpu_node;
struct device_node *be_node;
struct cbe_pmd_regs __iomem *pmd_regs;
struct cbe_iic_regs __iomem *iic_regs;
struct cbe_mic_tm_regs __iomem *mic_tm_regs;
struct cbe_pmd_shadow_regs pmd_shadow_regs;
} cbe_regs_maps[MAX_CBE];
static int cbe_regs_map_count;
static struct cbe_thread_map
{
struct device_node *cpu_node;
struct device_node *be_node;
struct cbe_regs_map *regs;
unsigned int thread_id;
unsigned int cbe_id;
} cbe_thread_map[NR_CPUS];
static cpumask_t cbe_local_mask[MAX_CBE] = { [0 ... MAX_CBE-1] = {CPU_BITS_NONE} };
static cpumask_t cbe_first_online_cpu = { CPU_BITS_NONE };
static struct cbe_regs_map *cbe_find_map(struct device_node *np)
{
int i;
struct device_node *tmp_np;
if (!of_node_is_type(np, "spe")) {
for (i = 0; i < cbe_regs_map_count; i++)
if (cbe_regs_maps[i].cpu_node == np ||
cbe_regs_maps[i].be_node == np)
return &cbe_regs_maps[i];
return NULL;
}
if (np->data)
return np->data;
/* walk up path until cpu or be node was found */
tmp_np = np;
do {
tmp_np = tmp_np->parent;
/* on a correct devicetree we won't get up to root */
BUG_ON(!tmp_np);
/* stop at the first ancestor that is a cpu or be node */
} while (!of_node_is_type(tmp_np, "cpu") &&
!of_node_is_type(tmp_np, "be"));
np->data = cbe_find_map(tmp_np);
return np->data;
}
struct cbe_pmd_regs __iomem *cbe_get_pmd_regs(struct device_node *np)
{
struct cbe_regs_map *map = cbe_find_map(np);
if (map == NULL)
return NULL;
return map->pmd_regs;
}
EXPORT_SYMBOL_GPL(cbe_get_pmd_regs);
struct cbe_pmd_regs __iomem *cbe_get_cpu_pmd_regs(int cpu)
{
struct cbe_regs_map *map = cbe_thread_map[cpu].regs;
if (map == NULL)
return NULL;
return map->pmd_regs;
}
EXPORT_SYMBOL_GPL(cbe_get_cpu_pmd_regs);
struct cbe_pmd_shadow_regs *cbe_get_pmd_shadow_regs(struct device_node *np)
{
struct cbe_regs_map *map = cbe_find_map(np);
if (map == NULL)
return NULL;
return &map->pmd_shadow_regs;
}
struct cbe_pmd_shadow_regs *cbe_get_cpu_pmd_shadow_regs(int cpu)
{
struct cbe_regs_map *map = cbe_thread_map[cpu].regs;
if (map == NULL)
return NULL;
return &map->pmd_shadow_regs;
}
struct cbe_iic_regs __iomem *cbe_get_iic_regs(struct device_node *np)
{
struct cbe_regs_map *map = cbe_find_map(np);
if (map == NULL)
return NULL;
return map->iic_regs;
}
struct cbe_iic_regs __iomem *cbe_get_cpu_iic_regs(int cpu)
{
struct cbe_regs_map *map = cbe_thread_map[cpu].regs;
if (map == NULL)
return NULL;
return map->iic_regs;
}
struct cbe_mic_tm_regs __iomem *cbe_get_mic_tm_regs(struct device_node *np)
{
struct cbe_regs_map *map = cbe_find_map(np);
if (map == NULL)
return NULL;
return map->mic_tm_regs;
}
struct cbe_mic_tm_regs __iomem *cbe_get_cpu_mic_tm_regs(int cpu)
{
struct cbe_regs_map *map = cbe_thread_map[cpu].regs;
if (map == NULL)
return NULL;
return map->mic_tm_regs;
}
EXPORT_SYMBOL_GPL(cbe_get_cpu_mic_tm_regs);
u32 cbe_get_hw_thread_id(int cpu)
{
return cbe_thread_map[cpu].thread_id;
}
EXPORT_SYMBOL_GPL(cbe_get_hw_thread_id);
u32 cbe_cpu_to_node(int cpu)
{
return cbe_thread_map[cpu].cbe_id;
}
EXPORT_SYMBOL_GPL(cbe_cpu_to_node);
u32 cbe_node_to_cpu(int node)
{
return cpumask_first(&cbe_local_mask[node]);
}
EXPORT_SYMBOL_GPL(cbe_node_to_cpu);
static struct device_node *__init cbe_get_be_node(int cpu_id)
{
struct device_node *np;
for_each_node_by_type (np, "be") {
int len,i;
const phandle *cpu_handle;
cpu_handle = of_get_property(np, "cpus", &len);
/*
* the CAB SLOF tree is non-compliant, so we just assume
* there is only one node
*/
if (WARN_ON_ONCE(!cpu_handle))
return np;
for (i = 0; i < len; i++) {
struct device_node *ch_np = of_find_node_by_phandle(cpu_handle[i]);
struct device_node *ci_np = of_get_cpu_node(cpu_id, NULL);
of_node_put(ch_np);
of_node_put(ci_np);
if (ch_np == ci_np)
return np;
}
}
return NULL;
}
static void __init cbe_fill_regs_map(struct cbe_regs_map *map)
{
if(map->be_node) {
struct device_node *be, *np, *parent_np;
be = map->be_node;
for_each_node_by_type(np, "pervasive") {
parent_np = of_get_parent(np);
if (parent_np == be)
map->pmd_regs = of_iomap(np, 0);
of_node_put(parent_np);
}
for_each_node_by_type(np, "CBEA-Internal-Interrupt-Controller") {
parent_np = of_get_parent(np);
if (parent_np == be)
map->iic_regs = of_iomap(np, 2);
of_node_put(parent_np);
}
for_each_node_by_type(np, "mic-tm") {
parent_np = of_get_parent(np);
if (parent_np == be)
map->mic_tm_regs = of_iomap(np, 0);
of_node_put(parent_np);
}
} else {
struct device_node *cpu;
/* That hack must die die die ! */
const struct address_prop {
unsigned long address;
unsigned int len;
} __attribute__((packed)) *prop;
cpu = map->cpu_node;
prop = of_get_property(cpu, "pervasive", NULL);
if (prop != NULL)
map->pmd_regs = ioremap(prop->address, prop->len);
prop = of_get_property(cpu, "iic", NULL);
if (prop != NULL)
map->iic_regs = ioremap(prop->address, prop->len);
prop = of_get_property(cpu, "mic-tm", NULL);
if (prop != NULL)
map->mic_tm_regs = ioremap(prop->address, prop->len);
}
}
void __init cbe_regs_init(void)
{
int i;
unsigned int thread_id;
struct device_node *cpu;
/* Build local fast map of CPUs */
for_each_possible_cpu(i) {
cbe_thread_map[i].cpu_node = of_get_cpu_node(i, &thread_id);
cbe_thread_map[i].be_node = cbe_get_be_node(i);
cbe_thread_map[i].thread_id = thread_id;
}
/* Find maps for each device tree CPU */
for_each_node_by_type(cpu, "cpu") {
struct cbe_regs_map *map;
unsigned int cbe_id;
cbe_id = cbe_regs_map_count++;
map = &cbe_regs_maps[cbe_id];
if (cbe_regs_map_count > MAX_CBE) {
printk(KERN_ERR "cbe_regs: More BE chips than supported"
"!\n");
cbe_regs_map_count--;
of_node_put(cpu);
return;
}
of_node_put(map->cpu_node);
map->cpu_node = of_node_get(cpu);
for_each_possible_cpu(i) {
struct cbe_thread_map *thread = &cbe_thread_map[i];
if (thread->cpu_node == cpu) {
thread->regs = map;
thread->cbe_id = cbe_id;
map->be_node = thread->be_node;
cpumask_set_cpu(i, &cbe_local_mask[cbe_id]);
if(thread->thread_id == 0)
cpumask_set_cpu(i, &cbe_first_online_cpu);
}
}
cbe_fill_regs_map(map);
}
}

@@ -1,387 +0,0 @@
// SPDX-License-Identifier: GPL-2.0-or-later
/*
* thermal support for the cell processor
*
* This module adds some sysfs attributes to cpu and spu nodes.
* Base for measurements are the digital thermal sensors (DTS)
* located on the chip.
* The accuracy is 2 degrees, over the range from 65 up to 125 degrees Celsius.
* The attributes can be found under
* /sys/devices/system/cpu/cpuX/thermal
* /sys/devices/system/spu/spuX/thermal
*
* The following attributes are added for each node:
* temperature:
* contains the current temperature measured by the DTS
* throttle_begin:
* throttling begins when the temperature is greater than or equal to
* throttle_begin. Setting this value to 125 prevents throttling.
* throttle_end:
* throttling ceases once the temperature drops below throttle_end.
* Because of the delay between applying throttling and the temperature
* actually falling, this value should be less than throttle_begin; a
* value equal to throttle_begin provides only very little hysteresis.
* throttle_full_stop:
* if the temperature is greater than or equal to throttle_full_stop,
* full throttling is applied to the cpu or spu. This value should be
* greater than throttle_begin and throttle_end. Setting this value to
* 65 prevents the unit from running code at all.
*
* (C) Copyright IBM Deutschland Entwicklung GmbH 2005
*
* Author: Christian Krafft <krafft@de.ibm.com>
*/
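/*
* Illustration (hypothetical shell session, values in degrees Celsius):
*
* cat /sys/devices/system/spu/spu0/thermal/temperature
* cat /sys/devices/system/cpu/cpu0/thermal/temperature0
* echo 85 > /sys/devices/system/cpu/cpu0/thermal/throttle_begin
*
* Note that cpu nodes expose temperature0/temperature1 (one per DTS),
* while spu nodes expose a single temperature attribute.
*/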
#include <linux/module.h>
#include <linux/device.h>
#include <linux/kernel.h>
#include <linux/cpu.h>
#include <linux/stringify.h>
#include <asm/spu.h>
#include <asm/io.h>
#include <asm/cell-regs.h>
#include "spu_priv1_mmio.h"
#define TEMP_MIN 65
#define TEMP_MAX 125
#define DEVICE_PREFIX_ATTR(_prefix,_name,_mode) \
struct device_attribute attr_ ## _prefix ## _ ## _name = { \
.attr = { .name = __stringify(_name), .mode = _mode }, \
.show = _prefix ## _show_ ## _name, \
.store = _prefix ## _store_ ## _name, \
};
static inline u8 reg_to_temp(u8 reg_value)
{
return ((reg_value & 0x3f) << 1) + TEMP_MIN;
}
static inline u8 temp_to_reg(u8 temp)
{
return ((temp - TEMP_MIN) >> 1) & 0x3f;
}
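/*
* Worked example: the DTS encodes the temperature in six bits with a
* two-degree step above TEMP_MIN, so reg_to_temp(0x08) is
* (8 << 1) + 65 = 81 degrees Celsius and temp_to_reg(81) is
* ((81 - 65) >> 1) & 0x3f = 0x08; odd values round down, so
* temp_to_reg(82) is 0x08 as well.
*/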
static struct cbe_pmd_regs __iomem *get_pmd_regs(struct device *dev)
{
struct spu *spu;
spu = container_of(dev, struct spu, dev);
return cbe_get_pmd_regs(spu_devnode(spu));
}
/* returns the value for a given spu in a given register */
static u8 spu_read_register_value(struct device *dev, union spe_reg __iomem *reg)
{
union spe_reg value;
struct spu *spu;
spu = container_of(dev, struct spu, dev);
value.val = in_be64(&reg->val);
return value.spe[spu->spe_id];
}
static ssize_t spu_show_temp(struct device *dev, struct device_attribute *attr,
char *buf)
{
u8 value;
struct cbe_pmd_regs __iomem *pmd_regs;
pmd_regs = get_pmd_regs(dev);
value = spu_read_register_value(dev, &pmd_regs->ts_ctsr1);
return sprintf(buf, "%d\n", reg_to_temp(value));
}
static ssize_t show_throttle(struct cbe_pmd_regs __iomem *pmd_regs, char *buf, int pos)
{
u64 value;
value = in_be64(&pmd_regs->tm_tpr.val);
/* access the corresponding byte */
value >>= pos;
value &= 0x3F;
return sprintf(buf, "%d\n", reg_to_temp(value));
}
static ssize_t store_throttle(struct cbe_pmd_regs __iomem *pmd_regs, const char *buf, size_t size, int pos)
{
u64 reg_value;
unsigned int temp;
u64 new_value;
int ret;
ret = sscanf(buf, "%u", &temp);
if (ret != 1 || temp < TEMP_MIN || temp > TEMP_MAX)
return -EINVAL;
new_value = temp_to_reg(temp);
reg_value = in_be64(&pmd_regs->tm_tpr.val);
/* zero out bits for new value */
reg_value &= ~(0xffull << pos);
/* set bits to new value */
reg_value |= new_value << pos;
out_be64(&pmd_regs->tm_tpr.val, reg_value);
return size;
}
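/*
* The pos argument selects a byte of the shared TM_TPR register: bits
* 0/8/16 hold the SPE throttle_end/begin/full_stop thresholds and bits
* 32/40/48 the PPE ones, which is why the spu_* and ppe_* wrappers
* below differ only in the offsets they pass.
*/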
static ssize_t spu_show_throttle_end(struct device *dev,
struct device_attribute *attr, char *buf)
{
return show_throttle(get_pmd_regs(dev), buf, 0);
}
static ssize_t spu_show_throttle_begin(struct device *dev,
struct device_attribute *attr, char *buf)
{
return show_throttle(get_pmd_regs(dev), buf, 8);
}
static ssize_t spu_show_throttle_full_stop(struct device *dev,
struct device_attribute *attr, char *buf)
{
return show_throttle(get_pmd_regs(dev), buf, 16);
}
static ssize_t spu_store_throttle_end(struct device *dev,
struct device_attribute *attr, const char *buf, size_t size)
{
return store_throttle(get_pmd_regs(dev), buf, size, 0);
}
static ssize_t spu_store_throttle_begin(struct device *dev,
struct device_attribute *attr, const char *buf, size_t size)
{
return store_throttle(get_pmd_regs(dev), buf, size, 8);
}
static ssize_t spu_store_throttle_full_stop(struct device *dev,
struct device_attribute *attr, const char *buf, size_t size)
{
return store_throttle(get_pmd_regs(dev), buf, size, 16);
}
static ssize_t ppe_show_temp(struct device *dev, char *buf, int pos)
{
struct cbe_pmd_regs __iomem *pmd_regs;
u64 value;
pmd_regs = cbe_get_cpu_pmd_regs(dev->id);
value = in_be64(&pmd_regs->ts_ctsr2);
value = (value >> pos) & 0x3f;
return sprintf(buf, "%d\n", reg_to_temp(value));
}
/* shows the temperature of the DTS on the PPE,
* located near the linear thermal sensor */
static ssize_t ppe_show_temp0(struct device *dev,
struct device_attribute *attr, char *buf)
{
return ppe_show_temp(dev, buf, 32);
}
/* shows the temperature of the second DTS on the PPE */
static ssize_t ppe_show_temp1(struct device *dev,
struct device_attribute *attr, char *buf)
{
return ppe_show_temp(dev, buf, 0);
}
static ssize_t ppe_show_throttle_end(struct device *dev,
struct device_attribute *attr, char *buf)
{
return show_throttle(cbe_get_cpu_pmd_regs(dev->id), buf, 32);
}
static ssize_t ppe_show_throttle_begin(struct device *dev,
struct device_attribute *attr, char *buf)
{
return show_throttle(cbe_get_cpu_pmd_regs(dev->id), buf, 40);
}
static ssize_t ppe_show_throttle_full_stop(struct device *dev,
struct device_attribute *attr, char *buf)
{
return show_throttle(cbe_get_cpu_pmd_regs(dev->id), buf, 48);
}
static ssize_t ppe_store_throttle_end(struct device *dev,
struct device_attribute *attr, const char *buf, size_t size)
{
return store_throttle(cbe_get_cpu_pmd_regs(dev->id), buf, size, 32);
}
static ssize_t ppe_store_throttle_begin(struct device *dev,
struct device_attribute *attr, const char *buf, size_t size)
{
return store_throttle(cbe_get_cpu_pmd_regs(dev->id), buf, size, 40);
}
static ssize_t ppe_store_throttle_full_stop(struct device *dev,
struct device_attribute *attr, const char *buf, size_t size)
{
return store_throttle(cbe_get_cpu_pmd_regs(dev->id), buf, size, 48);
}
static struct device_attribute attr_spu_temperature = {
.attr = {.name = "temperature", .mode = 0400 },
.show = spu_show_temp,
};
static DEVICE_PREFIX_ATTR(spu, throttle_end, 0600);
static DEVICE_PREFIX_ATTR(spu, throttle_begin, 0600);
static DEVICE_PREFIX_ATTR(spu, throttle_full_stop, 0600);
static struct attribute *spu_attributes[] = {
&attr_spu_temperature.attr,
&attr_spu_throttle_end.attr,
&attr_spu_throttle_begin.attr,
&attr_spu_throttle_full_stop.attr,
NULL,
};
static const struct attribute_group spu_attribute_group = {
.name = "thermal",
.attrs = spu_attributes,
};
static struct device_attribute attr_ppe_temperature0 = {
.attr = {.name = "temperature0", .mode = 0400 },
.show = ppe_show_temp0,
};
static struct device_attribute attr_ppe_temperature1 = {
.attr = {.name = "temperature1", .mode = 0400 },
.show = ppe_show_temp1,
};
static DEVICE_PREFIX_ATTR(ppe, throttle_end, 0600);
static DEVICE_PREFIX_ATTR(ppe, throttle_begin, 0600);
static DEVICE_PREFIX_ATTR(ppe, throttle_full_stop, 0600);
static struct attribute *ppe_attributes[] = {
&attr_ppe_temperature0.attr,
&attr_ppe_temperature1.attr,
&attr_ppe_throttle_end.attr,
&attr_ppe_throttle_begin.attr,
&attr_ppe_throttle_full_stop.attr,
NULL,
};
static struct attribute_group ppe_attribute_group = {
.name = "thermal",
.attrs = ppe_attributes,
};
/*
* initialize throttling with default values
*/
static int __init init_default_values(void)
{
int cpu;
struct cbe_pmd_regs __iomem *pmd_regs;
struct device *dev;
union ppe_spe_reg tpr;
union spe_reg str1;
u64 str2;
union spe_reg cr1;
u64 cr2;
/* TPR defaults */
/* ppe
* 1F - no full stop
* 08 - dynamic throttling starts if over 80 degrees
* 03 - dynamic throttling ceases if below 70 degrees */
tpr.ppe = 0x1F0803;
/* spe
* 10 - full stopped when over 96 degrees
* 08 - dynamic throttling starts if over 80 degrees
* 03 - dynamic throttling ceases if below 70 degrees
*/
tpr.spe = 0x100803;
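/* cross-check with reg_to_temp(): 0x1f -> 127 (above TEMP_MAX, so no
* full stop), 0x10 -> 97, 0x08 -> 81 and 0x03 -> 71 degrees Celsius,
* matching the thresholds above to within the two-degree step */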
/* STR defaults */
/* str1
* 10 - stop 16 of 32 cycles
*/
str1.val = 0x1010101010101010ull;
/* str2
* 10 - stop 16 of 32 cycles
*/
str2 = 0x10;
/* CR defaults */
/* cr1
* 4 - normal operation
*/
cr1.val = 0x0404040404040404ull;
/* cr2
* 4 - normal operation
*/
cr2 = 0x04;
for_each_possible_cpu (cpu) {
pr_debug("processing cpu %d\n", cpu);
dev = get_cpu_device(cpu);
if (!dev) {
pr_info("invalid dev pointer for cbe_thermal\n");
return -EINVAL;
}
pmd_regs = cbe_get_cpu_pmd_regs(dev->id);
if (!pmd_regs) {
pr_info("invalid CBE regs pointer for cbe_thermal\n");
return -EINVAL;
}
out_be64(&pmd_regs->tm_str2, str2);
out_be64(&pmd_regs->tm_str1.val, str1.val);
out_be64(&pmd_regs->tm_tpr.val, tpr.val);
out_be64(&pmd_regs->tm_cr1.val, cr1.val);
out_be64(&pmd_regs->tm_cr2, cr2);
}
return 0;
}
static int __init thermal_init(void)
{
int rc = init_default_values();
if (rc == 0) {
spu_add_dev_attr_group(&spu_attribute_group);
cpu_add_dev_attr_group(&ppe_attribute_group);
}
return rc;
}
module_init(thermal_init);
static void __exit thermal_exit(void)
{
spu_remove_dev_attr_group(&spu_attribute_group);
cpu_remove_dev_attr_group(&ppe_attribute_group);
}
module_exit(thermal_exit);
MODULE_DESCRIPTION("Cell processor thermal driver");
MODULE_LICENSE("GPL");
MODULE_AUTHOR("Christian Krafft <krafft@de.ibm.com>");

@@ -1,15 +0,0 @@
/* SPDX-License-Identifier: GPL-2.0-or-later */
/*
* Cell Platform common data structures
*
* Copyright 2015, Daniel Axtens, IBM Corporation
*/
#ifndef CELL_H
#define CELL_H
#include <asm/pci-bridge.h>
extern struct pci_controller_ops cell_pci_controller_ops;
#endif

@@ -1,134 +0,0 @@
// SPDX-License-Identifier: GPL-2.0-or-later
/*
* spu aware cpufreq governor for the cell processor
*
* © Copyright IBM Corporation 2006-2008
*
* Author: Christian Krafft <krafft@de.ibm.com>
*/
#include <linux/cpufreq.h>
#include <linux/sched.h>
#include <linux/sched/loadavg.h>
#include <linux/module.h>
#include <linux/timer.h>
#include <linux/workqueue.h>
#include <linux/atomic.h>
#include <asm/machdep.h>
#include <asm/spu.h>
#define POLL_TIME 100000 /* in µs */
#define EXP 753 /* exp(-1) in fixed-point */
struct spu_gov_info_struct {
unsigned long busy_spus; /* fixed-point */
struct cpufreq_policy *policy;
struct delayed_work work;
unsigned int poll_int; /* µs */
};
static DEFINE_PER_CPU(struct spu_gov_info_struct, spu_gov_info);
static int calc_freq(struct spu_gov_info_struct *info)
{
int cpu;
int busy_spus;
cpu = info->policy->cpu;
busy_spus = atomic_read(&cbe_spu_info[cpu_to_node(cpu)].busy_spus);
info->busy_spus = calc_load(info->busy_spus, EXP, busy_spus * FIXED_1);
pr_debug("cpu %d: busy_spus=%d, info->busy_spus=%ld\n",
cpu, busy_spus, info->busy_spus);
return info->policy->max * info->busy_spus / FIXED_1;
}
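/*
* calc_load() (from <linux/sched/loadavg.h>) keeps busy_spus as an
* exponentially weighted moving average in FIXED_1 (1 << 11) fixed
* point:
*
* busy_spus = (busy_spus * EXP + active * (FIXED_1 - EXP)) / FIXED_1
*
* where active is the current busy count scaled by FIXED_1, and
* EXP = 753 ~ FIXED_1 / e gives roughly a 1/e decay per poll interval.
*/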
static void spu_gov_work(struct work_struct *work)
{
struct spu_gov_info_struct *info;
int delay;
unsigned long target_freq;
info = container_of(work, struct spu_gov_info_struct, work.work);
/* after cancel_delayed_work_sync we unset info->policy */
BUG_ON(info->policy == NULL);
target_freq = calc_freq(info);
__cpufreq_driver_target(info->policy, target_freq, CPUFREQ_RELATION_H);
delay = usecs_to_jiffies(info->poll_int);
schedule_delayed_work_on(info->policy->cpu, &info->work, delay);
}
static void spu_gov_init_work(struct spu_gov_info_struct *info)
{
int delay = usecs_to_jiffies(info->poll_int);
INIT_DEFERRABLE_WORK(&info->work, spu_gov_work);
schedule_delayed_work_on(info->policy->cpu, &info->work, delay);
}
static void spu_gov_cancel_work(struct spu_gov_info_struct *info)
{
cancel_delayed_work_sync(&info->work);
}
static int spu_gov_start(struct cpufreq_policy *policy)
{
unsigned int cpu = policy->cpu;
struct spu_gov_info_struct *info = &per_cpu(spu_gov_info, cpu);
struct spu_gov_info_struct *affected_info;
int i;
if (!cpu_online(cpu)) {
printk(KERN_ERR "cpu %d is not online\n", cpu);
return -EINVAL;
}
if (!policy->cur) {
printk(KERN_ERR "no cpu specified in policy\n");
return -EINVAL;
}
/* initialize spu_gov_info for all affected cpus */
for_each_cpu(i, policy->cpus) {
affected_info = &per_cpu(spu_gov_info, i);
affected_info->policy = policy;
}
info->poll_int = POLL_TIME;
/* setup timer */
spu_gov_init_work(info);
return 0;
}
static void spu_gov_stop(struct cpufreq_policy *policy)
{
unsigned int cpu = policy->cpu;
struct spu_gov_info_struct *info = &per_cpu(spu_gov_info, cpu);
int i;
/* cancel timer */
spu_gov_cancel_work(info);
/* clean spu_gov_info for all affected cpus */
for_each_cpu (i, policy->cpus) {
info = &per_cpu(spu_gov_info, i);
info->policy = NULL;
}
}
static struct cpufreq_governor spu_governor = {
.name = "spudemand",
.start = spu_gov_start,
.stop = spu_gov_stop,
.owner = THIS_MODULE,
};
cpufreq_governor_init(spu_governor);
cpufreq_governor_exit(spu_governor);
MODULE_DESCRIPTION("SPU-aware cpufreq governor for the cell processor");
MODULE_LICENSE("GPL");
MODULE_AUTHOR("Christian Krafft <krafft@de.ibm.com>");

@@ -1,390 +0,0 @@
// SPDX-License-Identifier: GPL-2.0-or-later
/*
* Cell Internal Interrupt Controller
*
* Copyright (C) 2006 Benjamin Herrenschmidt (benh@kernel.crashing.org)
* IBM, Corp.
*
* (C) Copyright IBM Deutschland Entwicklung GmbH 2005
*
* Author: Arnd Bergmann <arndb@de.ibm.com>
*
* TODO:
* - Fix various assumptions related to HW CPU numbers vs. linux CPU numbers
* vs node numbers in the setup code
* - Implement proper handling of maxcpus=1/2 (that is, routing of irqs from
* a non-active node to the active node)
*/
#include <linux/interrupt.h>
#include <linux/irq.h>
#include <linux/irqdomain.h>
#include <linux/export.h>
#include <linux/percpu.h>
#include <linux/types.h>
#include <linux/ioport.h>
#include <linux/kernel_stat.h>
#include <linux/pgtable.h>
#include <linux/of_address.h>
#include <asm/io.h>
#include <asm/ptrace.h>
#include <asm/machdep.h>
#include <asm/cell-regs.h>
#include "interrupt.h"
struct iic {
struct cbe_iic_thread_regs __iomem *regs;
u8 target_id;
u8 eoi_stack[16];
int eoi_ptr;
struct device_node *node;
};
static DEFINE_PER_CPU(struct iic, cpu_iic);
#define IIC_NODE_COUNT 2
static struct irq_domain *iic_host;
/* Convert between "pending" bits and hw irq number */
static irq_hw_number_t iic_pending_to_hwnum(struct cbe_iic_pending_bits bits)
{
unsigned char unit = bits.source & 0xf;
unsigned char node = bits.source >> 4;
unsigned char class = bits.class & 3;
/* Decode IPIs */
if (bits.flags & CBE_IIC_IRQ_IPI)
return IIC_IRQ_TYPE_IPI | (bits.prio >> 4);
else
return (node << IIC_IRQ_NODE_SHIFT) | (class << 4) | unit;
}
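/*
* Example decode: flags = 0, source = 0x14, class = 2 describes unit 4
* (an SPE) on node 1 and yields (1 << 8) | (2 << 4) | 0x4 = 0x124,
* while an IPI with prio = 0xf0 yields 0x80 | 0xf = 0x8f.
*/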
static void iic_mask(struct irq_data *d)
{
}
static void iic_unmask(struct irq_data *d)
{
}
static void iic_eoi(struct irq_data *d)
{
struct iic *iic = this_cpu_ptr(&cpu_iic);
out_be64(&iic->regs->prio, iic->eoi_stack[--iic->eoi_ptr]);
BUG_ON(iic->eoi_ptr < 0);
}
static struct irq_chip iic_chip = {
.name = "CELL-IIC",
.irq_mask = iic_mask,
.irq_unmask = iic_unmask,
.irq_eoi = iic_eoi,
};
static void iic_ioexc_eoi(struct irq_data *d)
{
}
static void iic_ioexc_cascade(struct irq_desc *desc)
{
struct irq_chip *chip = irq_desc_get_chip(desc);
struct cbe_iic_regs __iomem *node_iic =
(void __iomem *)irq_desc_get_handler_data(desc);
unsigned int irq = irq_desc_get_irq(desc);
unsigned int base = (irq & 0xffffff00) | IIC_IRQ_TYPE_IOEXC;
unsigned long bits, ack;
int cascade;
for (;;) {
bits = in_be64(&node_iic->iic_is);
if (bits == 0)
break;
/* pre-ack edge interrupts */
ack = bits & IIC_ISR_EDGE_MASK;
if (ack)
out_be64(&node_iic->iic_is, ack);
/* handle them */
for (cascade = 63; cascade >= 0; cascade--)
if (bits & (0x8000000000000000UL >> cascade))
generic_handle_domain_irq(iic_host,
base | cascade);
/* post-ack level interrupts */
ack = bits & ~IIC_ISR_EDGE_MASK;
if (ack)
out_be64(&node_iic->iic_is, ack);
}
chip->irq_eoi(&desc->irq_data);
}
static struct irq_chip iic_ioexc_chip = {
.name = "CELL-IOEX",
.irq_mask = iic_mask,
.irq_unmask = iic_unmask,
.irq_eoi = iic_ioexc_eoi,
};
/* Get an IRQ number from the pending state register of the IIC */
static unsigned int iic_get_irq(void)
{
struct cbe_iic_pending_bits pending;
struct iic *iic;
unsigned int virq;
iic = this_cpu_ptr(&cpu_iic);
*(unsigned long *) &pending =
in_be64((u64 __iomem *) &iic->regs->pending_destr);
if (!(pending.flags & CBE_IIC_IRQ_VALID))
return 0;
virq = irq_linear_revmap(iic_host, iic_pending_to_hwnum(pending));
if (!virq)
return 0;
iic->eoi_stack[++iic->eoi_ptr] = pending.prio;
BUG_ON(iic->eoi_ptr > 15);
return virq;
}
void iic_setup_cpu(void)
{
out_be64(&this_cpu_ptr(&cpu_iic)->regs->prio, 0xff);
}
u8 iic_get_target_id(int cpu)
{
return per_cpu(cpu_iic, cpu).target_id;
}
EXPORT_SYMBOL_GPL(iic_get_target_id);
#ifdef CONFIG_SMP
/* Use the highest interrupt priorities for IPI */
static inline int iic_msg_to_irq(int msg)
{
return IIC_IRQ_TYPE_IPI + 0xf - msg;
}
void iic_message_pass(int cpu, int msg)
{
out_be64(&per_cpu(cpu_iic, cpu).regs->generate, (0xf - msg) << 4);
}
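/*
* Example: msg 0 maps to hwirq 0x80 | (0xf - 0) = 0x8f, and the
* matching write to the generate register is (0xf - 0) << 4 = 0xf0,
* which iic_pending_to_hwnum() decodes back to 0x8f.
*/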
static void iic_request_ipi(int msg)
{
int virq;
virq = irq_create_mapping(iic_host, iic_msg_to_irq(msg));
if (!virq) {
printk(KERN_ERR
"iic: failed to map IPI %s\n", smp_ipi_name[msg]);
return;
}
/*
* smp_request_message_ipi reports any error itself; it also
* returns non-zero when the message is not needed.
*/
if (smp_request_message_ipi(virq, msg))
irq_dispose_mapping(virq);
}
void iic_request_IPIs(void)
{
iic_request_ipi(PPC_MSG_CALL_FUNCTION);
iic_request_ipi(PPC_MSG_RESCHEDULE);
iic_request_ipi(PPC_MSG_TICK_BROADCAST);
iic_request_ipi(PPC_MSG_NMI_IPI);
}
#endif /* CONFIG_SMP */
static int iic_host_match(struct irq_domain *h, struct device_node *node,
enum irq_domain_bus_token bus_token)
{
return of_device_is_compatible(node,
"IBM,CBEA-Internal-Interrupt-Controller");
}
static int iic_host_map(struct irq_domain *h, unsigned int virq,
irq_hw_number_t hw)
{
switch (hw & IIC_IRQ_TYPE_MASK) {
case IIC_IRQ_TYPE_IPI:
irq_set_chip_and_handler(virq, &iic_chip, handle_percpu_irq);
break;
case IIC_IRQ_TYPE_IOEXC:
irq_set_chip_and_handler(virq, &iic_ioexc_chip,
handle_edge_eoi_irq);
break;
default:
irq_set_chip_and_handler(virq, &iic_chip, handle_edge_eoi_irq);
}
return 0;
}
static int iic_host_xlate(struct irq_domain *h, struct device_node *ct,
const u32 *intspec, unsigned int intsize,
irq_hw_number_t *out_hwirq, unsigned int *out_flags)
{
unsigned int node, ext, unit, class;
const u32 *val;
if (!of_device_is_compatible(ct,
"IBM,CBEA-Internal-Interrupt-Controller"))
return -ENODEV;
if (intsize != 1)
return -ENODEV;
val = of_get_property(ct, "#interrupt-cells", NULL);
if (val == NULL || *val != 1)
return -ENODEV;
node = intspec[0] >> 24;
ext = (intspec[0] >> 16) & 0xff;
class = (intspec[0] >> 8) & 0xff;
unit = intspec[0] & 0xff;
/* Check if node is in supported range */
if (node > 1)
return -EINVAL;
/* Build up interrupt number, special case for IO exceptions */
*out_hwirq = (node << IIC_IRQ_NODE_SHIFT);
if (unit == IIC_UNIT_IIC && class == 1)
*out_hwirq |= IIC_IRQ_TYPE_IOEXC | ext;
else
*out_hwirq |= IIC_IRQ_TYPE_NORMAL |
(class << IIC_IRQ_CLASS_SHIFT) | unit;
/* Dummy flags, ignored by iic code */
*out_flags = IRQ_TYPE_EDGE_RISING;
return 0;
}
static const struct irq_domain_ops iic_host_ops = {
.match = iic_host_match,
.map = iic_host_map,
.xlate = iic_host_xlate,
};
static void __init init_one_iic(unsigned int hw_cpu, unsigned long addr,
struct device_node *node)
{
/* XXX FIXME: should locate the linux CPU number from the HW cpu
* number properly. We are lucky for now
*/
struct iic *iic = &per_cpu(cpu_iic, hw_cpu);
iic->regs = ioremap(addr, sizeof(struct cbe_iic_thread_regs));
BUG_ON(iic->regs == NULL);
iic->target_id = ((hw_cpu & 2) << 3) | ((hw_cpu & 1) ? 0xf : 0xe);
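/* e.g. hw_cpu 0..3 yields target ids 0x0e, 0x0f, 0x1e, 0x1f,
* i.e. (node << 4) | IIC_UNIT_THREAD_{0,1} */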
iic->eoi_stack[0] = 0xff;
iic->node = of_node_get(node);
out_be64(&iic->regs->prio, 0);
printk(KERN_INFO "IIC for CPU %d target id 0x%x : %pOF\n",
hw_cpu, iic->target_id, node);
}
static int __init setup_iic(void)
{
struct device_node *dn;
struct resource r0, r1;
unsigned int node, cascade, found = 0;
struct cbe_iic_regs __iomem *node_iic;
const u32 *np;
for_each_node_by_name(dn, "interrupt-controller") {
if (!of_device_is_compatible(dn,
"IBM,CBEA-Internal-Interrupt-Controller"))
continue;
np = of_get_property(dn, "ibm,interrupt-server-ranges", NULL);
if (np == NULL) {
printk(KERN_WARNING "IIC: CPU association not found\n");
of_node_put(dn);
return -ENODEV;
}
if (of_address_to_resource(dn, 0, &r0) ||
of_address_to_resource(dn, 1, &r1)) {
printk(KERN_WARNING "IIC: Can't resolve addresses\n");
of_node_put(dn);
return -ENODEV;
}
found++;
init_one_iic(np[0], r0.start, dn);
init_one_iic(np[1], r1.start, dn);
/* Setup cascade for IO exceptions. XXX cleanup tricks to get
* node vs CPU etc...
* Note that we configure the IIC_IR here with a hard coded
* priority of 1. We might want to improve that later.
*/
node = np[0] >> 1;
node_iic = cbe_get_cpu_iic_regs(np[0]);
cascade = node << IIC_IRQ_NODE_SHIFT;
cascade |= 1 << IIC_IRQ_CLASS_SHIFT;
cascade |= IIC_UNIT_IIC;
cascade = irq_create_mapping(iic_host, cascade);
if (!cascade)
continue;
/*
* irq_data is a generic pointer that gets passed back
* to us later, so the forced cast is fine.
*/
irq_set_handler_data(cascade, (void __force *)node_iic);
irq_set_chained_handler(cascade, iic_ioexc_cascade);
out_be64(&node_iic->iic_ir,
(1 << 12) /* priority */ |
(node << 4) /* dest node */ |
IIC_UNIT_THREAD_0 /* route them to thread 0 */);
/* Flush pending (make sure it triggers if there is
* anything pending)
*/
out_be64(&node_iic->iic_is, 0xfffffffffffffffful);
}
if (found)
return 0;
else
return -ENODEV;
}
void __init iic_init_IRQ(void)
{
/* Setup an irq host data structure */
iic_host = irq_domain_add_linear(NULL, IIC_SOURCE_COUNT, &iic_host_ops,
NULL);
BUG_ON(iic_host == NULL);
irq_set_default_host(iic_host);
/* Discover and initialize iics */
if (setup_iic() < 0)
panic("IIC: Failed to initialize !\n");
/* Set master interrupt handling function */
ppc_md.get_irq = iic_get_irq;
/* Enable on current CPU */
iic_setup_cpu();
}
void iic_set_interrupt_routing(int cpu, int thread, int priority)
{
struct cbe_iic_regs __iomem *iic_regs = cbe_get_cpu_iic_regs(cpu);
u64 iic_ir = 0;
int node = cpu >> 1;
/* Set which node and thread will handle the next interrupt */
iic_ir |= CBE_IIC_IR_PRIO(priority) |
CBE_IIC_IR_DEST_NODE(node);
if (thread == 0)
iic_ir |= CBE_IIC_IR_DEST_UNIT(CBE_IIC_IR_PT_0);
else
iic_ir |= CBE_IIC_IR_DEST_UNIT(CBE_IIC_IR_PT_1);
out_be64(&iic_regs->iic_ir, iic_ir);
}

@@ -1,90 +0,0 @@
/* SPDX-License-Identifier: GPL-2.0 */
#ifndef ASM_CELL_PIC_H
#define ASM_CELL_PIC_H
#ifdef __KERNEL__
/*
* Mapping of IIC pending bits into per-node interrupt numbers.
*
* Interrupt numbers are in the range 0...0x1ff where the top bit
* (0x100) represent the source node. Only 2 nodes are supported with
* the current code though it's trivial to extend that if necessary using
* higher level bits
*
* The bottom 8 bits are split into 2 type bits and 6 data bits that
* depend on the type:
*
* 00 (0x00 | data) : normal interrupt. data is (class << 4) | source
* 01 (0x40 | data) : IO exception. data is the exception number as
* defined by bit numbers in IIC_SR
* 10 (0x80 | data) : IPI. data is the IPI number (obtained from the priority)
* and node is always 0 (IPIs are per-cpu, their source is
* not relevant)
* 11 (0xc0 | data) : reserved
*
* In addition, interrupt number 0x80000000 is defined as always invalid
* (that is, the node field is never expected to extend to more than 23 bits)
*
*/
enum {
IIC_IRQ_INVALID = 0x80000000u,
IIC_IRQ_NODE_MASK = 0x100,
IIC_IRQ_NODE_SHIFT = 8,
IIC_IRQ_MAX = 0x1ff,
IIC_IRQ_TYPE_MASK = 0xc0,
IIC_IRQ_TYPE_NORMAL = 0x00,
IIC_IRQ_TYPE_IOEXC = 0x40,
IIC_IRQ_TYPE_IPI = 0x80,
IIC_IRQ_CLASS_SHIFT = 4,
IIC_IRQ_CLASS_0 = 0x00,
IIC_IRQ_CLASS_1 = 0x10,
IIC_IRQ_CLASS_2 = 0x20,
IIC_SOURCE_COUNT = 0x200,
/* Here are defined the various source/dest units. Avoid using those
* definitions if you can, they are mostly here for reference
*/
IIC_UNIT_SPU_0 = 0x4,
IIC_UNIT_SPU_1 = 0x7,
IIC_UNIT_SPU_2 = 0x3,
IIC_UNIT_SPU_3 = 0x8,
IIC_UNIT_SPU_4 = 0x2,
IIC_UNIT_SPU_5 = 0x9,
IIC_UNIT_SPU_6 = 0x1,
IIC_UNIT_SPU_7 = 0xa,
IIC_UNIT_IOC_0 = 0x0,
IIC_UNIT_IOC_1 = 0xb,
IIC_UNIT_THREAD_0 = 0xe, /* target only */
IIC_UNIT_THREAD_1 = 0xf, /* target only */
IIC_UNIT_IIC = 0xe, /* source only (IO exceptions) */
/* Base numbers for the external interrupts */
IIC_IRQ_EXT_IOIF0 =
IIC_IRQ_TYPE_NORMAL | IIC_IRQ_CLASS_2 | IIC_UNIT_IOC_0,
IIC_IRQ_EXT_IOIF1 =
IIC_IRQ_TYPE_NORMAL | IIC_IRQ_CLASS_2 | IIC_UNIT_IOC_1,
/* Base numbers for the IIC_ISR interrupts */
IIC_IRQ_IOEX_TMI = IIC_IRQ_TYPE_IOEXC | IIC_IRQ_CLASS_1 | 63,
IIC_IRQ_IOEX_PMI = IIC_IRQ_TYPE_IOEXC | IIC_IRQ_CLASS_1 | 62,
IIC_IRQ_IOEX_ATI = IIC_IRQ_TYPE_IOEXC | IIC_IRQ_CLASS_1 | 61,
IIC_IRQ_IOEX_MATBFI = IIC_IRQ_TYPE_IOEXC | IIC_IRQ_CLASS_1 | 60,
IIC_IRQ_IOEX_ELDI = IIC_IRQ_TYPE_IOEXC | IIC_IRQ_CLASS_1 | 59,
/* Which bits in IIC_ISR are edge sensitive */
IIC_ISR_EDGE_MASK = 0x4ul,
};
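/*
* Example: IIC_IRQ_EXT_IOIF0 works out to 0x00 | 0x20 | 0x0 = 0x20 and
* IIC_IRQ_EXT_IOIF1 to 0x00 | 0x20 | 0xb = 0x2b on node 0; the node-1
* equivalents add IIC_IRQ_NODE_MASK (0x100).
*/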
extern void iic_init_IRQ(void);
extern void iic_message_pass(int cpu, int msg);
extern void iic_request_IPIs(void);
extern void iic_setup_cpu(void);
extern u8 iic_get_target_id(int cpu);
extern void spider_init_IRQ(void);
extern void iic_set_interrupt_routing(int cpu, int thread, int priority);
#endif
#endif /* ASM_CELL_PIC_H */

File diff suppressed because it is too large

@@ -1,125 +0,0 @@
// SPDX-License-Identifier: GPL-2.0-or-later
/*
* CBE Pervasive Monitor and Debug
*
* (C) Copyright IBM Corporation 2005
*
* Authors: Maximino Aguilar (maguilar@us.ibm.com)
* Michael N. Day (mnday@us.ibm.com)
*/
#undef DEBUG
#include <linux/interrupt.h>
#include <linux/irq.h>
#include <linux/percpu.h>
#include <linux/types.h>
#include <linux/kallsyms.h>
#include <linux/pgtable.h>
#include <asm/io.h>
#include <asm/machdep.h>
#include <asm/reg.h>
#include <asm/cell-regs.h>
#include <asm/cpu_has_feature.h>
#include "pervasive.h"
#include "ras.h"
static void cbe_power_save(void)
{
unsigned long ctrl, thread_switch_control;
/* Ensure our interrupt state is properly tracked */
if (!prep_irq_for_idle())
return;
ctrl = mfspr(SPRN_CTRLF);
/* Enable DEC and EE interrupt request */
thread_switch_control = mfspr(SPRN_TSC_CELL);
thread_switch_control |= TSC_CELL_EE_ENABLE | TSC_CELL_EE_BOOST;
switch (ctrl & CTRL_CT) {
case CTRL_CT0:
thread_switch_control |= TSC_CELL_DEC_ENABLE_0;
break;
case CTRL_CT1:
thread_switch_control |= TSC_CELL_DEC_ENABLE_1;
break;
default:
printk(KERN_WARNING "%s: unknown configuration\n",
__func__);
break;
}
mtspr(SPRN_TSC_CELL, thread_switch_control);
/*
* go into low thread priority, medium priority will be
* restored for us after wake-up.
*/
HMT_low();
/*
* atomically disable thread execution and runlatch.
* External and Decrementer exceptions are still handled when the
* thread is disabled but now enter in cbe_system_reset_exception()
*/
ctrl &= ~(CTRL_RUNLATCH | CTRL_TE);
mtspr(SPRN_CTRLT, ctrl);
/* Re-enable interrupts in MSR */
__hard_irq_enable();
}
static int cbe_system_reset_exception(struct pt_regs *regs)
{
switch (regs->msr & SRR1_WAKEMASK) {
case SRR1_WAKEDEC:
set_dec(1);
break;
case SRR1_WAKEEE:
/*
* Handle these when interrupts get re-enabled and we take
* them as regular exceptions. We are in an NMI context
* and can't handle these here.
*/
break;
case SRR1_WAKEMT:
return cbe_sysreset_hack();
#ifdef CONFIG_CBE_RAS
case SRR1_WAKESYSERR:
cbe_system_error_exception(regs);
break;
case SRR1_WAKETHERM:
cbe_thermal_exception(regs);
break;
#endif /* CONFIG_CBE_RAS */
default:
/* do system reset */
return 0;
}
/* everything handled */
return 1;
}
void __init cbe_pervasive_init(void)
{
int cpu;
if (!cpu_has_feature(CPU_FTR_PAUSE_ZERO))
return;
for_each_possible_cpu(cpu) {
struct cbe_pmd_regs __iomem *regs = cbe_get_cpu_pmd_regs(cpu);
if (!regs)
continue;
/* Enable Pause(0) control bit */
out_be64(&regs->pmcr, in_be64(&regs->pmcr) |
CBE_PMD_PAUSE_ZERO_CONTROL);
}
ppc_md.power_save = cbe_power_save;
ppc_md.system_reset_exception = cbe_system_reset_exception;
}

@ -1,26 +0,0 @@
/* SPDX-License-Identifier: GPL-2.0-or-later */
/*
* Cell Pervasive Monitor and Debug interface and HW structures
*
* (C) Copyright IBM Corporation 2005
*
* Authors: Maximino Aguilar (maguilar@us.ibm.com)
* David J. Erb (djerb@us.ibm.com)
*/
#ifndef PERVASIVE_H
#define PERVASIVE_H
extern void cbe_pervasive_init(void);
#ifdef CONFIG_PPC_IBM_CELL_RESETBUTTON
extern int cbe_sysreset_hack(void);
#else
static inline int cbe_sysreset_hack(void)
{
return 1;
}
#endif /* CONFIG_PPC_IBM_CELL_RESETBUTTON */
#endif

@@ -1,412 +0,0 @@
// SPDX-License-Identifier: GPL-2.0-or-later
/*
* Cell Broadband Engine Performance Monitor
*
* (C) Copyright IBM Corporation 2001,2006
*
* Author:
* David Erb (djerb@us.ibm.com)
* Kevin Corry (kevcorry@us.ibm.com)
*/
#include <linux/interrupt.h>
#include <linux/irqdomain.h>
#include <linux/types.h>
#include <linux/export.h>
#include <asm/io.h>
#include <asm/irq_regs.h>
#include <asm/machdep.h>
#include <asm/pmc.h>
#include <asm/reg.h>
#include <asm/spu.h>
#include <asm/cell-regs.h>
#include "interrupt.h"
/*
* When writing to write-only mmio addresses, save a shadow copy. All of the
* registers are 32-bit, but stored in the upper-half of a 64-bit field in
* pmd_regs.
*/
#define WRITE_WO_MMIO(reg, x) \
do { \
u32 _x = (x); \
struct cbe_pmd_regs __iomem *pmd_regs; \
struct cbe_pmd_shadow_regs *shadow_regs; \
pmd_regs = cbe_get_cpu_pmd_regs(cpu); \
shadow_regs = cbe_get_cpu_pmd_shadow_regs(cpu); \
out_be64(&(pmd_regs->reg), (((u64)_x) << 32)); \
shadow_regs->reg = _x; \
} while (0)
#define READ_SHADOW_REG(val, reg) \
do { \
struct cbe_pmd_shadow_regs *shadow_regs; \
shadow_regs = cbe_get_cpu_pmd_shadow_regs(cpu); \
(val) = shadow_regs->reg; \
} while (0)
#define READ_MMIO_UPPER32(val, reg) \
do { \
struct cbe_pmd_regs __iomem *pmd_regs; \
pmd_regs = cbe_get_cpu_pmd_regs(cpu); \
(val) = (u32)(in_be64(&pmd_regs->reg) >> 32); \
} while (0)
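/*
* Example: cbe_write_pm(cpu, pm_control, x) expands through
* WRITE_WO_MMIO to a 64-bit store of ((u64)x << 32) plus an update of
* the 32-bit shadow copy, so a later cbe_read_pm(cpu, pm_control) can
* return the value via READ_SHADOW_REG even though the hardware
* register is write-only.
*/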
/*
* Physical counter registers.
* Each physical counter can act as one 32-bit counter or two 16-bit counters.
*/
u32 cbe_read_phys_ctr(u32 cpu, u32 phys_ctr)
{
u32 val_in_latch, val = 0;
if (phys_ctr < NR_PHYS_CTRS) {
READ_SHADOW_REG(val_in_latch, counter_value_in_latch);
/* Read the latch or the actual counter, whichever is newer. */
if (val_in_latch & (1 << phys_ctr)) {
READ_SHADOW_REG(val, pm_ctr[phys_ctr]);
} else {
READ_MMIO_UPPER32(val, pm_ctr[phys_ctr]);
}
}
return val;
}
EXPORT_SYMBOL_GPL(cbe_read_phys_ctr);
void cbe_write_phys_ctr(u32 cpu, u32 phys_ctr, u32 val)
{
struct cbe_pmd_shadow_regs *shadow_regs;
u32 pm_ctrl;
if (phys_ctr < NR_PHYS_CTRS) {
/* Writing to a counter only writes to a hardware latch.
* The new value is not propagated to the actual counter
* until the performance monitor is enabled.
*/
WRITE_WO_MMIO(pm_ctr[phys_ctr], val);
pm_ctrl = cbe_read_pm(cpu, pm_control);
if (pm_ctrl & CBE_PM_ENABLE_PERF_MON) {
/* The counters are already active, so we need to
* rewrite the pm_control register to "re-enable"
* the PMU.
*/
cbe_write_pm(cpu, pm_control, pm_ctrl);
} else {
shadow_regs = cbe_get_cpu_pmd_shadow_regs(cpu);
shadow_regs->counter_value_in_latch |= (1 << phys_ctr);
}
}
}
EXPORT_SYMBOL_GPL(cbe_write_phys_ctr);
/*
* "Logical" counter registers.
* These will read/write 16 or 32 bits depending on the
* current size of the counter. Counters 4 - 7 are always 16-bit.
*/
u32 cbe_read_ctr(u32 cpu, u32 ctr)
{
u32 val;
u32 phys_ctr = ctr & (NR_PHYS_CTRS - 1);
val = cbe_read_phys_ctr(cpu, phys_ctr);
if (cbe_get_ctr_size(cpu, phys_ctr) == 16)
val = (ctr < NR_PHYS_CTRS) ? (val >> 16) : (val & 0xffff);
return val;
}
EXPORT_SYMBOL_GPL(cbe_read_ctr);
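/*
* Example (assuming NR_PHYS_CTRS == 4): in 16-bit mode, logical
* counter 1 reads the upper half of physical counter 1 (ctr <
* NR_PHYS_CTRS, so val >> 16) while logical counter 5 reads the lower
* half of the same physical counter (5 & 3 == 1, so val & 0xffff).
*/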
void cbe_write_ctr(u32 cpu, u32 ctr, u32 val)
{
u32 phys_ctr;
u32 phys_val;
phys_ctr = ctr & (NR_PHYS_CTRS - 1);
if (cbe_get_ctr_size(cpu, phys_ctr) == 16) {
phys_val = cbe_read_phys_ctr(cpu, phys_ctr);
if (ctr < NR_PHYS_CTRS)
val = (val << 16) | (phys_val & 0xffff);
else
val = (val & 0xffff) | (phys_val & 0xffff0000);
}
cbe_write_phys_ctr(cpu, phys_ctr, val);
}
EXPORT_SYMBOL_GPL(cbe_write_ctr);
/*
* Counter-control registers.
* Each "logical" counter has a corresponding control register.
*/
u32 cbe_read_pm07_control(u32 cpu, u32 ctr)
{
u32 pm07_control = 0;
if (ctr < NR_CTRS)
READ_SHADOW_REG(pm07_control, pm07_control[ctr]);
return pm07_control;
}
EXPORT_SYMBOL_GPL(cbe_read_pm07_control);
void cbe_write_pm07_control(u32 cpu, u32 ctr, u32 val)
{
if (ctr < NR_CTRS)
WRITE_WO_MMIO(pm07_control[ctr], val);
}
EXPORT_SYMBOL_GPL(cbe_write_pm07_control);
/*
* Other PMU control registers. Most of these are write-only.
*/
u32 cbe_read_pm(u32 cpu, enum pm_reg_name reg)
{
u32 val = 0;
switch (reg) {
case group_control:
READ_SHADOW_REG(val, group_control);
break;
case debug_bus_control:
READ_SHADOW_REG(val, debug_bus_control);
break;
case trace_address:
READ_MMIO_UPPER32(val, trace_address);
break;
case ext_tr_timer:
READ_SHADOW_REG(val, ext_tr_timer);
break;
case pm_status:
READ_MMIO_UPPER32(val, pm_status);
break;
case pm_control:
READ_SHADOW_REG(val, pm_control);
break;
case pm_interval:
READ_MMIO_UPPER32(val, pm_interval);
break;
case pm_start_stop:
READ_SHADOW_REG(val, pm_start_stop);
break;
}
return val;
}
EXPORT_SYMBOL_GPL(cbe_read_pm);
void cbe_write_pm(u32 cpu, enum pm_reg_name reg, u32 val)
{
switch (reg) {
case group_control:
WRITE_WO_MMIO(group_control, val);
break;
case debug_bus_control:
WRITE_WO_MMIO(debug_bus_control, val);
break;
case trace_address:
WRITE_WO_MMIO(trace_address, val);
break;
case ext_tr_timer:
WRITE_WO_MMIO(ext_tr_timer, val);
break;
case pm_status:
WRITE_WO_MMIO(pm_status, val);
break;
case pm_control:
WRITE_WO_MMIO(pm_control, val);
break;
case pm_interval:
WRITE_WO_MMIO(pm_interval, val);
break;
case pm_start_stop:
WRITE_WO_MMIO(pm_start_stop, val);
break;
}
}
EXPORT_SYMBOL_GPL(cbe_write_pm);
/*
* Get/set the size of a physical counter to either 16 or 32 bits.
*/
u32 cbe_get_ctr_size(u32 cpu, u32 phys_ctr)
{
u32 pm_ctrl, size = 0;
if (phys_ctr < NR_PHYS_CTRS) {
pm_ctrl = cbe_read_pm(cpu, pm_control);
size = (pm_ctrl & CBE_PM_16BIT_CTR(phys_ctr)) ? 16 : 32;
}
return size;
}
EXPORT_SYMBOL_GPL(cbe_get_ctr_size);
void cbe_set_ctr_size(u32 cpu, u32 phys_ctr, u32 ctr_size)
{
u32 pm_ctrl;
if (phys_ctr < NR_PHYS_CTRS) {
pm_ctrl = cbe_read_pm(cpu, pm_control);
switch (ctr_size) {
case 16:
pm_ctrl |= CBE_PM_16BIT_CTR(phys_ctr);
break;
case 32:
pm_ctrl &= ~CBE_PM_16BIT_CTR(phys_ctr);
break;
}
cbe_write_pm(cpu, pm_control, pm_ctrl);
}
}
EXPORT_SYMBOL_GPL(cbe_set_ctr_size);
/*
* Enable/disable the entire performance monitoring unit.
* When we enable the PMU, all pending writes to counters get committed.
*/
void cbe_enable_pm(u32 cpu)
{
struct cbe_pmd_shadow_regs *shadow_regs;
u32 pm_ctrl;
shadow_regs = cbe_get_cpu_pmd_shadow_regs(cpu);
shadow_regs->counter_value_in_latch = 0;
pm_ctrl = cbe_read_pm(cpu, pm_control) | CBE_PM_ENABLE_PERF_MON;
cbe_write_pm(cpu, pm_control, pm_ctrl);
}
EXPORT_SYMBOL_GPL(cbe_enable_pm);
void cbe_disable_pm(u32 cpu)
{
u32 pm_ctrl;
pm_ctrl = cbe_read_pm(cpu, pm_control) & ~CBE_PM_ENABLE_PERF_MON;
cbe_write_pm(cpu, pm_control, pm_ctrl);
}
EXPORT_SYMBOL_GPL(cbe_disable_pm);
/*
* Reading from the trace_buffer.
* The trace buffer is two 64-bit registers. Reading from
* the second half automatically increments the trace_address.
*/
void cbe_read_trace_buffer(u32 cpu, u64 *buf)
{
struct cbe_pmd_regs __iomem *pmd_regs = cbe_get_cpu_pmd_regs(cpu);
*buf++ = in_be64(&pmd_regs->trace_buffer_0_63);
*buf++ = in_be64(&pmd_regs->trace_buffer_64_127);
}
EXPORT_SYMBOL_GPL(cbe_read_trace_buffer);
/*
* Enabling/disabling interrupts for the entire performance monitoring unit.
*/
u32 cbe_get_and_clear_pm_interrupts(u32 cpu)
{
/* Reading pm_status clears the interrupt bits. */
return cbe_read_pm(cpu, pm_status);
}
EXPORT_SYMBOL_GPL(cbe_get_and_clear_pm_interrupts);
void cbe_enable_pm_interrupts(u32 cpu, u32 thread, u32 mask)
{
/* Set which node and thread will handle the next interrupt. */
iic_set_interrupt_routing(cpu, thread, 0);
/* Enable the interrupt bits in the pm_status register. */
if (mask)
cbe_write_pm(cpu, pm_status, mask);
}
EXPORT_SYMBOL_GPL(cbe_enable_pm_interrupts);
void cbe_disable_pm_interrupts(u32 cpu)
{
cbe_get_and_clear_pm_interrupts(cpu);
cbe_write_pm(cpu, pm_status, 0);
}
EXPORT_SYMBOL_GPL(cbe_disable_pm_interrupts);
static irqreturn_t cbe_pm_irq(int irq, void *dev_id)
{
perf_irq(get_irq_regs());
return IRQ_HANDLED;
}
static int __init cbe_init_pm_irq(void)
{
unsigned int irq;
int rc, node;
for_each_online_node(node) {
irq = irq_create_mapping(NULL, IIC_IRQ_IOEX_PMI |
(node << IIC_IRQ_NODE_SHIFT));
if (!irq) {
printk("ERROR: Unable to allocate irq for node %d\n",
node);
return -EINVAL;
}
rc = request_irq(irq, cbe_pm_irq,
0, "cbe-pmu-0", NULL);
if (rc) {
printk("ERROR: Request for irq on node %d failed\n",
node);
return rc;
}
}
return 0;
}
machine_arch_initcall(cell, cbe_init_pm_irq);
void cbe_sync_irq(int node)
{
unsigned int irq;
irq = irq_find_mapping(NULL,
IIC_IRQ_IOEX_PMI
| (node << IIC_IRQ_NODE_SHIFT));
if (!irq) {
printk(KERN_WARNING "ERROR, unable to get existing irq %d " \
"for node %d\n", irq, node);
return;
}
synchronize_irq(irq);
}
EXPORT_SYMBOL_GPL(cbe_sync_irq);

@@ -1,352 +0,0 @@
// SPDX-License-Identifier: GPL-2.0-or-later
/*
* Copyright 2006-2008, IBM Corporation.
*/
#undef DEBUG
#include <linux/types.h>
#include <linux/kernel.h>
#include <linux/slab.h>
#include <linux/smp.h>
#include <linux/reboot.h>
#include <linux/kexec.h>
#include <linux/crash_dump.h>
#include <linux/of.h>
#include <asm/kexec.h>
#include <asm/reg.h>
#include <asm/io.h>
#include <asm/machdep.h>
#include <asm/rtas.h>
#include <asm/cell-regs.h>
#include "ras.h"
#include "pervasive.h"
static void dump_fir(int cpu)
{
struct cbe_pmd_regs __iomem *pregs = cbe_get_cpu_pmd_regs(cpu);
struct cbe_iic_regs __iomem *iregs = cbe_get_cpu_iic_regs(cpu);
if (pregs == NULL)
return;
/* Todo: do some nicer parsing of bits and based on them go down
* to other sub-units FIRs and not only IIC
*/
printk(KERN_ERR "Global Checkstop FIR : 0x%016llx\n",
in_be64(&pregs->checkstop_fir));
printk(KERN_ERR "Global Recoverable FIR : 0x%016llx\n",
in_be64(&pregs->checkstop_fir));
printk(KERN_ERR "Global MachineCheck FIR : 0x%016llx\n",
in_be64(&pregs->spec_att_mchk_fir));
if (iregs == NULL)
return;
printk(KERN_ERR "IOC FIR : 0x%016llx\n",
in_be64(&iregs->ioc_fir));
}
DEFINE_INTERRUPT_HANDLER(cbe_system_error_exception)
{
int cpu = smp_processor_id();
printk(KERN_ERR "System Error Interrupt on CPU %d !\n", cpu);
dump_fir(cpu);
dump_stack();
}
DEFINE_INTERRUPT_HANDLER(cbe_maintenance_exception)
{
int cpu = smp_processor_id();
/*
* Nothing implemented for the maintenance interrupt at this point
*/
printk(KERN_ERR "Unhandled Maintenance interrupt on CPU %d !\n", cpu);
dump_stack();
}
DEFINE_INTERRUPT_HANDLER(cbe_thermal_exception)
{
int cpu = smp_processor_id();
/*
* Nothing implemented for the thermal interrupt at this point
*/
printk(KERN_ERR "Unhandled Thermal interrupt on CPU %d !\n", cpu);
dump_stack();
}
static int cbe_machine_check_handler(struct pt_regs *regs)
{
int cpu = smp_processor_id();
printk(KERN_ERR "Machine Check Interrupt on CPU %d !\n", cpu);
dump_fir(cpu);
/* No recovery from this code now, let's continue */
return 0;
}
struct ptcal_area {
struct list_head list;
int nid;
int order;
struct page *pages;
};
static LIST_HEAD(ptcal_list);
static int ptcal_start_tok, ptcal_stop_tok;
static int __init cbe_ptcal_enable_on_node(int nid, int order)
{
struct ptcal_area *area;
int ret = -ENOMEM;
unsigned long addr;
if (is_kdump_kernel())
rtas_call(ptcal_stop_tok, 1, 1, NULL, nid);
area = kmalloc(sizeof(*area), GFP_KERNEL);
if (!area)
goto out_err;
area->nid = nid;
area->order = order;
area->pages = __alloc_pages_node(area->nid,
GFP_KERNEL|__GFP_THISNODE,
area->order);
if (!area->pages) {
printk(KERN_WARNING "%s: no page on node %d\n",
__func__, area->nid);
goto out_free_area;
}
/*
* We move the ptcal area to the middle of the allocated
* page, in order to avoid prefetches in memcpy and similar
* functions stepping on it.
*/
addr = __pa(page_address(area->pages)) + (PAGE_SIZE >> 1);
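/* e.g. with a 4K PAGE_SIZE the area starts 2K into the first page of
* the allocated block, well away from either page boundary */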
printk(KERN_DEBUG "%s: enabling PTCAL on node %d address=0x%016lx\n",
__func__, area->nid, addr);
ret = -EIO;
if (rtas_call(ptcal_start_tok, 3, 1, NULL, area->nid,
(unsigned int)(addr >> 32),
(unsigned int)(addr & 0xffffffff))) {
printk(KERN_ERR "%s: error enabling PTCAL on node %d!\n",
__func__, nid);
goto out_free_pages;
}
list_add(&area->list, &ptcal_list);
return 0;
out_free_pages:
__free_pages(area->pages, area->order);
out_free_area:
kfree(area);
out_err:
return ret;
}
static int __init cbe_ptcal_enable(void)
{
const u32 *size;
struct device_node *np;
int order, found_mic = 0;
np = of_find_node_by_path("/rtas");
if (!np)
return -ENODEV;
size = of_get_property(np, "ibm,cbe-ptcal-size", NULL);
if (!size) {
of_node_put(np);
return -ENODEV;
}
pr_debug("%s: enabling PTCAL, size = 0x%x\n", __func__, *size);
order = get_order(*size);
of_node_put(np);
/* support for malta device trees, with be@/mic@ nodes */
for_each_node_by_type(np, "mic-tm") {
cbe_ptcal_enable_on_node(of_node_to_nid(np), order);
found_mic = 1;
}
if (found_mic)
return 0;
/* support for older device tree - use cpu nodes */
for_each_node_by_type(np, "cpu") {
const u32 *nid = of_get_property(np, "node-id", NULL);
if (!nid) {
printk(KERN_ERR "%s: node %pOF is missing node-id?\n",
__func__, np);
continue;
}
cbe_ptcal_enable_on_node(*nid, order);
found_mic = 1;
}
return found_mic ? 0 : -ENODEV;
}
static int cbe_ptcal_disable(void)
{
struct ptcal_area *area, *tmp;
int ret = 0;
pr_debug("%s: disabling PTCAL\n", __func__);
list_for_each_entry_safe(area, tmp, &ptcal_list, list) {
/* disable ptcal on this node */
if (rtas_call(ptcal_stop_tok, 1, 1, NULL, area->nid)) {
printk(KERN_ERR "%s: error disabling PTCAL "
"on node %d!\n", __func__,
area->nid);
ret = -EIO;
continue;
}
/* ensure we can access the PTCAL area */
memset(page_address(area->pages), 0,
1 << (area->order + PAGE_SHIFT));
/* clean up */
list_del(&area->list);
__free_pages(area->pages, area->order);
kfree(area);
}
return ret;
}
static int cbe_ptcal_notify_reboot(struct notifier_block *nb,
unsigned long code, void *data)
{
return cbe_ptcal_disable();
}
static void cbe_ptcal_crash_shutdown(void)
{
cbe_ptcal_disable();
}
static struct notifier_block cbe_ptcal_reboot_notifier = {
.notifier_call = cbe_ptcal_notify_reboot
};
#ifdef CONFIG_PPC_IBM_CELL_RESETBUTTON
static int sysreset_hack;
static int __init cbe_sysreset_init(void)
{
struct cbe_pmd_regs __iomem *regs;
sysreset_hack = of_machine_is_compatible("IBM,CBPLUS-1.0");
if (!sysreset_hack)
return 0;
regs = cbe_get_cpu_pmd_regs(0);
if (!regs)
return 0;
/* Enable JTAG system-reset hack */
out_be32(&regs->fir_mode_reg,
in_be32(&regs->fir_mode_reg) |
CBE_PMD_FIR_MODE_M8);
return 0;
}
device_initcall(cbe_sysreset_init);
int cbe_sysreset_hack(void)
{
struct cbe_pmd_regs __iomem *regs;
/*
* The BMC can inject user triggered system reset exceptions,
* but cannot set the system reset reason in srr1,
* so check an extra register here.
*/
if (sysreset_hack && (smp_processor_id() == 0)) {
regs = cbe_get_cpu_pmd_regs(0);
if (!regs)
return 0;
if (in_be64(&regs->ras_esc_0) & 0x0000ffff) {
out_be64(&regs->ras_esc_0, 0);
return 0;
}
}
return 1;
}
#endif /* CONFIG_PPC_IBM_CELL_RESETBUTTON */
static int __init cbe_ptcal_init(void)
{
int ret;
ptcal_start_tok = rtas_function_token(RTAS_FN_IBM_CBE_START_PTCAL);
ptcal_stop_tok = rtas_function_token(RTAS_FN_IBM_CBE_STOP_PTCAL);
if (ptcal_start_tok == RTAS_UNKNOWN_SERVICE
|| ptcal_stop_tok == RTAS_UNKNOWN_SERVICE)
return -ENODEV;
ret = register_reboot_notifier(&cbe_ptcal_reboot_notifier);
if (ret)
goto out1;
ret = crash_shutdown_register(&cbe_ptcal_crash_shutdown);
if (ret)
goto out2;
return cbe_ptcal_enable();
out2:
unregister_reboot_notifier(&cbe_ptcal_reboot_notifier);
out1:
printk(KERN_ERR "Can't disable PTCAL, so not enabling\n");
return ret;
}
arch_initcall(cbe_ptcal_init);
void __init cbe_ras_init(void)
{
unsigned long hid0;
/*
* Enable System Error & thermal interrupts and wakeup conditions
*/
hid0 = mfspr(SPRN_HID0);
hid0 |= HID0_CBE_THERM_INT_EN | HID0_CBE_THERM_WAKEUP |
HID0_CBE_SYSERR_INT_EN | HID0_CBE_SYSERR_WAKEUP;
mtspr(SPRN_HID0, hid0);
mb();
/*
* Install machine check handler. Leave setting of precise mode to
* what the firmware did for now
*/
ppc_md.machine_check_exception = cbe_machine_check_handler;
mb();
/*
* For now, we assume that IOC_FIR is already set to forward some
* error conditions to the System Error handler. If that is not true
* then it will have to be fixed up here.
*/
}

@@ -1,13 +0,0 @@
/* SPDX-License-Identifier: GPL-2.0 */
#ifndef RAS_H
#define RAS_H
#include <asm/interrupt.h>
DECLARE_INTERRUPT_HANDLER(cbe_system_error_exception);
DECLARE_INTERRUPT_HANDLER(cbe_maintenance_exception);
DECLARE_INTERRUPT_HANDLER(cbe_thermal_exception);
extern void cbe_ras_init(void);
#endif /* RAS_H */

@@ -1,274 +0,0 @@
// SPDX-License-Identifier: GPL-2.0-or-later
/*
* linux/arch/powerpc/platforms/cell/cell_setup.c
*
* Copyright (C) 1995 Linus Torvalds
* Adapted from 'alpha' version by Gary Thomas
* Modified by Cort Dougan (cort@cs.nmt.edu)
* Modified by PPC64 Team, IBM Corp
* Modified by Cell Team, IBM Deutschland Entwicklung GmbH
*/
#undef DEBUG
#include <linux/sched.h>
#include <linux/kernel.h>
#include <linux/mm.h>
#include <linux/stddef.h>
#include <linux/export.h>
#include <linux/unistd.h>
#include <linux/user.h>
#include <linux/reboot.h>
#include <linux/init.h>
#include <linux/delay.h>
#include <linux/irq.h>
#include <linux/seq_file.h>
#include <linux/root_dev.h>
#include <linux/console.h>
#include <linux/mutex.h>
#include <linux/memory_hotplug.h>
#include <linux/of_platform.h>
#include <linux/platform_device.h>
#include <asm/mmu.h>
#include <asm/processor.h>
#include <asm/io.h>
#include <asm/rtas.h>
#include <asm/pci-bridge.h>
#include <asm/iommu.h>
#include <asm/dma.h>
#include <asm/machdep.h>
#include <asm/time.h>
#include <asm/nvram.h>
#include <asm/cputable.h>
#include <asm/ppc-pci.h>
#include <asm/irq.h>
#include <asm/spu.h>
#include <asm/spu_priv1.h>
#include <asm/udbg.h>
#include <asm/mpic.h>
#include <asm/cell-regs.h>
#include <asm/io-workarounds.h>
#include "cell.h"
#include "interrupt.h"
#include "pervasive.h"
#include "ras.h"
#ifdef DEBUG
#define DBG(fmt...) udbg_printf(fmt)
#else
#define DBG(fmt...)
#endif
static void cell_show_cpuinfo(struct seq_file *m)
{
struct device_node *root;
const char *model = "";
root = of_find_node_by_path("/");
if (root)
model = of_get_property(root, "model", NULL);
seq_printf(m, "machine\t\t: CHRP %s\n", model);
of_node_put(root);
}
static void cell_progress(char *s, unsigned short hex)
{
printk("*** %04x : %s\n", hex, s ? s : "");
}
static void cell_fixup_pcie_rootcomplex(struct pci_dev *dev)
{
struct pci_controller *hose;
const char *s;
int i;
if (!machine_is(cell))
return;
/* We're searching for a direct child of the PHB */
if (dev->bus->self != NULL || dev->devfn != 0)
return;
hose = pci_bus_to_host(dev->bus);
if (hose == NULL)
return;
/* Only on PCIE */
if (!of_device_is_compatible(hose->dn, "pciex"))
return;
/* And only on axon */
s = of_get_property(hose->dn, "model", NULL);
if (!s || strcmp(s, "Axon") != 0)
return;
for (i = 0; i < PCI_BRIDGE_RESOURCES; i++) {
dev->resource[i].start = dev->resource[i].end = 0;
dev->resource[i].flags = 0;
}
printk(KERN_DEBUG "PCI: Hiding resources on Axon PCIE RC %s\n",
pci_name(dev));
}
DECLARE_PCI_FIXUP_HEADER(PCI_ANY_ID, PCI_ANY_ID, cell_fixup_pcie_rootcomplex);
static int cell_setup_phb(struct pci_controller *phb)
{
const char *model;
struct device_node *np;
int rc = rtas_setup_phb(phb);
if (rc)
return rc;
phb->controller_ops = cell_pci_controller_ops;
np = phb->dn;
model = of_get_property(np, "model", NULL);
if (model == NULL || !of_node_name_eq(np, "pci"))
return 0;
/* Setup workarounds for spider */
if (strcmp(model, "Spider"))
return 0;
iowa_register_bus(phb, &spiderpci_ops, &spiderpci_iowa_init,
(void *)SPIDER_PCI_REG_BASE);
return 0;
}
static const struct of_device_id cell_bus_ids[] __initconst = {
{ .type = "soc", },
{ .compatible = "soc", },
{ .type = "spider", },
{ .type = "axon", },
{ .type = "plb5", },
{ .type = "plb4", },
{ .type = "opb", },
{ .type = "ebc", },
{},
};
static int __init cell_publish_devices(void)
{
struct device_node *root = of_find_node_by_path("/");
struct device_node *np;
int node;
/* Publish OF platform devices for southbridge IOs */
of_platform_bus_probe(NULL, cell_bus_ids, NULL);
/* On spider based blades, we need to manually create the OF
* platform devices for the PCI host bridges
*/
for_each_child_of_node(root, np) {
if (!of_node_is_type(np, "pci") && !of_node_is_type(np, "pciex"))
continue;
of_platform_device_create(np, NULL, NULL);
}
of_node_put(root);
/* There is no device for the MIC memory controller, thus we create
* a platform device for it to attach the EDAC driver to.
*/
for_each_online_node(node) {
if (cbe_get_cpu_mic_tm_regs(cbe_node_to_cpu(node)) == NULL)
continue;
platform_device_register_simple("cbe-mic", node, NULL, 0);
}
return 0;
}
machine_subsys_initcall(cell, cell_publish_devices);
static void __init mpic_init_IRQ(void)
{
struct device_node *dn;
struct mpic *mpic;
for_each_node_by_name(dn, "interrupt-controller") {
if (!of_device_is_compatible(dn, "CBEA,platform-open-pic"))
continue;
/* The MPIC driver will get everything it needs from the
* device-tree, just pass 0 to all arguments
*/
mpic = mpic_alloc(dn, 0, MPIC_SECONDARY | MPIC_NO_RESET,
0, 0, " MPIC ");
if (mpic == NULL)
continue;
mpic_init(mpic);
}
}
static void __init cell_init_irq(void)
{
iic_init_IRQ();
spider_init_IRQ();
mpic_init_IRQ();
}
static void __init cell_set_dabrx(void)
{
mtspr(SPRN_DABRX, DABRX_KERNEL | DABRX_USER);
}
static void __init cell_setup_arch(void)
{
#ifdef CONFIG_SPU_BASE
spu_priv1_ops = &spu_priv1_mmio_ops;
spu_management_ops = &spu_management_of_ops;
#endif
cbe_regs_init();
cell_set_dabrx();
#ifdef CONFIG_CBE_RAS
cbe_ras_init();
#endif
#ifdef CONFIG_SMP
smp_init_cell();
#endif
/* init to some ~sane value until calibrate_delay() runs */
loops_per_jiffy = 50000000;
/* Find and initialize PCI host bridges */
init_pci_config_tokens();
cbe_pervasive_init();
mmio_nvram_init();
}
static int __init cell_probe(void)
{
if (!of_machine_is_compatible("IBM,CBEA") &&
!of_machine_is_compatible("IBM,CPBW-1.0"))
return 0;
pm_power_off = rtas_power_off;
return 1;
}
define_machine(cell) {
.name = "Cell",
.probe = cell_probe,
.setup_arch = cell_setup_arch,
.show_cpuinfo = cell_show_cpuinfo,
.restart = rtas_restart,
.halt = rtas_halt,
.get_boot_time = rtas_get_boot_time,
.get_rtc_time = rtas_get_rtc_time,
.set_rtc_time = rtas_set_rtc_time,
.progress = cell_progress,
.init_IRQ = cell_init_irq,
.pci_setup_phb = cell_setup_phb,
};
struct pci_controller_ops cell_pci_controller_ops;

@@ -1,162 +0,0 @@
// SPDX-License-Identifier: GPL-2.0-or-later
/*
* SMP support for BPA machines.
*
* Dave Engebretsen, Peter Bergner, and
* Mike Corrigan {engebret|bergner|mikec}@us.ibm.com
*
* Plus various changes from other IBM teams...
*/
#undef DEBUG
#include <linux/kernel.h>
#include <linux/sched.h>
#include <linux/smp.h>
#include <linux/interrupt.h>
#include <linux/delay.h>
#include <linux/init.h>
#include <linux/spinlock.h>
#include <linux/cache.h>
#include <linux/err.h>
#include <linux/device.h>
#include <linux/cpu.h>
#include <linux/pgtable.h>
#include <asm/ptrace.h>
#include <linux/atomic.h>
#include <asm/irq.h>
#include <asm/page.h>
#include <asm/io.h>
#include <asm/smp.h>
#include <asm/paca.h>
#include <asm/machdep.h>
#include <asm/cputable.h>
#include <asm/firmware.h>
#include <asm/rtas.h>
#include <asm/cputhreads.h>
#include <asm/text-patching.h>
#include "interrupt.h"
#include <asm/udbg.h>
#ifdef DEBUG
#define DBG(fmt...) udbg_printf(fmt)
#else
#define DBG(fmt...)
#endif
/*
* The Primary thread of each non-boot processor was started from the OF client
* interface by prom_hold_cpus and is spinning on secondary_hold_spinloop.
*/
static cpumask_t of_spin_map;
/**
* smp_startup_cpu() - start the given cpu
* @lcpu: Logical CPU ID of the CPU to be started.
*
* At boot time, there is nothing to do for primary threads which were
* started from Open Firmware. For anything else, call RTAS with the
* appropriate start location.
*
* Returns:
* 0 - failure
* 1 - success
*/
static inline int smp_startup_cpu(unsigned int lcpu)
{
int status;
unsigned long start_here =
__pa(ppc_function_entry(generic_secondary_smp_init));
unsigned int pcpu;
int start_cpu;
if (cpumask_test_cpu(lcpu, &of_spin_map))
/* Already started by OF and sitting in spin loop */
return 1;
pcpu = get_hard_smp_processor_id(lcpu);
/*
* If the RTAS start-cpu token does not exist then presume the
* cpu is already spinning.
*/
start_cpu = rtas_function_token(RTAS_FN_START_CPU);
if (start_cpu == RTAS_UNKNOWN_SERVICE)
return 1;
status = rtas_call(start_cpu, 3, 1, NULL, pcpu, start_here, lcpu);
if (status != 0) {
printk(KERN_ERR "start-cpu failed: %i\n", status);
return 0;
}
return 1;
}
static void smp_cell_setup_cpu(int cpu)
{
if (cpu != boot_cpuid)
iic_setup_cpu();
/*
* change default DABRX to allow user watchpoints
*/
mtspr(SPRN_DABRX, DABRX_KERNEL | DABRX_USER);
}
static int smp_cell_kick_cpu(int nr)
{
if (nr < 0 || nr >= nr_cpu_ids)
return -EINVAL;
if (!smp_startup_cpu(nr))
return -ENOENT;
/*
* The processor is currently spinning, waiting for the
* cpu_start field to become non-zero. After we set cpu_start,
* the processor will continue on to secondary_start.
*/
paca_ptrs[nr]->cpu_start = 1;
return 0;
}
static struct smp_ops_t bpa_iic_smp_ops = {
.message_pass = iic_message_pass,
.probe = iic_request_IPIs,
.kick_cpu = smp_cell_kick_cpu,
.setup_cpu = smp_cell_setup_cpu,
.cpu_bootable = smp_generic_cpu_bootable,
};
/* This is called very early */
void __init smp_init_cell(void)
{
int i;
DBG(" -> smp_init_cell()\n");
smp_ops = &bpa_iic_smp_ops;
/* Mark threads which are still spinning in hold loops. */
if (cpu_has_feature(CPU_FTR_SMT)) {
for_each_present_cpu(i) {
if (cpu_thread_in_core(i) == 0)
cpumask_set_cpu(i, &of_spin_map);
}
} else
cpumask_copy(&of_spin_map, cpu_present_mask);
cpumask_clear_cpu(boot_cpuid, &of_spin_map);
/* Non-lpar has additional take/give timebase */
if (rtas_function_token(RTAS_FN_FREEZE_TIME_BASE) != RTAS_UNKNOWN_SERVICE) {
smp_ops->give_timebase = rtas_give_timebase;
smp_ops->take_timebase = rtas_take_timebase;
}
DBG(" <- smp_init_cell()\n");
}

@@ -1,170 +0,0 @@
// SPDX-License-Identifier: GPL-2.0-or-later
/*
* IO workarounds for PCI on Celleb/Cell platform
*
* (C) Copyright 2006-2007 TOSHIBA CORPORATION
*/
#undef DEBUG
#include <linux/kernel.h>
#include <linux/of_address.h>
#include <linux/slab.h>
#include <linux/io.h>
#include <asm/ppc-pci.h>
#include <asm/pci-bridge.h>
#include <asm/io-workarounds.h>
#define SPIDER_PCI_DISABLE_PREFETCH
struct spiderpci_iowa_private {
void __iomem *regs;
};
static void spiderpci_io_flush(struct iowa_bus *bus)
{
struct spiderpci_iowa_private *priv;
priv = bus->private;
in_be32(priv->regs + SPIDER_PCI_DUMMY_READ);
iosync();
}
#define SPIDER_PCI_MMIO_READ(name, ret) \
static ret spiderpci_##name(const PCI_IO_ADDR addr) \
{ \
ret val = __do_##name(addr); \
spiderpci_io_flush(iowa_mem_find_bus(addr)); \
return val; \
}
#define SPIDER_PCI_MMIO_READ_STR(name) \
static void spiderpci_##name(const PCI_IO_ADDR addr, void *buf, \
unsigned long count) \
{ \
__do_##name(addr, buf, count); \
spiderpci_io_flush(iowa_mem_find_bus(addr)); \
}
SPIDER_PCI_MMIO_READ(readb, u8)
SPIDER_PCI_MMIO_READ(readw, u16)
SPIDER_PCI_MMIO_READ(readl, u32)
SPIDER_PCI_MMIO_READ(readq, u64)
SPIDER_PCI_MMIO_READ(readw_be, u16)
SPIDER_PCI_MMIO_READ(readl_be, u32)
SPIDER_PCI_MMIO_READ(readq_be, u64)
SPIDER_PCI_MMIO_READ_STR(readsb)
SPIDER_PCI_MMIO_READ_STR(readsw)
SPIDER_PCI_MMIO_READ_STR(readsl)
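/* Annotation, not part of the diff: one instantiation of the macro
* above, SPIDER_PCI_MMIO_READ(readl, u32), expands to roughly:
*
* static u32 spiderpci_readl(const PCI_IO_ADDR addr)
* {
* u32 val = __do_readl(addr);
* spiderpci_io_flush(iowa_mem_find_bus(addr));
* return val;
* }
*
* i.e. every MMIO read is followed by a flush through the bridge's
* dummy-read register via spiderpci_io_flush() above.
*/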
static void spiderpci_memcpy_fromio(void *dest, const PCI_IO_ADDR src,
unsigned long n)
{
__do_memcpy_fromio(dest, src, n);
spiderpci_io_flush(iowa_mem_find_bus(src));
}
static int __init spiderpci_pci_setup_chip(struct pci_controller *phb,
void __iomem *regs)
{
void *dummy_page_va;
dma_addr_t dummy_page_da;
#ifdef SPIDER_PCI_DISABLE_PREFETCH
u32 val = in_be32(regs + SPIDER_PCI_VCI_CNTL_STAT);
pr_debug("SPIDER_IOWA:PVCI_Control_Status was 0x%08x\n", val);
out_be32(regs + SPIDER_PCI_VCI_CNTL_STAT, val | 0x8);
#endif /* SPIDER_PCI_DISABLE_PREFETCH */
/* setup dummy read */
/*
* On a Cell Blade we can't control which XDR memory kmalloc() uses
* to allocate dummy_page_va. For best performance the page should
* come from the XDR nearest the spider-pci, which would mean picking
* the CBE closest to it, but there is no obvious way to do that.
*
* Celleb does not have this problem, because it has only one XDR.
*/
dummy_page_va = kmalloc(PAGE_SIZE, GFP_KERNEL);
if (!dummy_page_va) {
pr_err("SPIDERPCI-IOWA:Alloc dummy_page_va failed.\n");
return -1;
}
dummy_page_da = dma_map_single(phb->parent, dummy_page_va,
PAGE_SIZE, DMA_FROM_DEVICE);
if (dma_mapping_error(phb->parent, dummy_page_da)) {
pr_err("SPIDER-IOWA:Map dummy page filed.\n");
kfree(dummy_page_va);
return -1;
}
out_be32(regs + SPIDER_PCI_DUMMY_READ_BASE, dummy_page_da);
return 0;
}
int __init spiderpci_iowa_init(struct iowa_bus *bus, void *data)
{
void __iomem *regs = NULL;
struct spiderpci_iowa_private *priv;
struct device_node *np = bus->phb->dn;
struct resource r;
unsigned long offset = (unsigned long)data;
pr_debug("SPIDERPCI-IOWA:Bus initialize for spider(%pOF)\n",
np);
priv = kzalloc(sizeof(*priv), GFP_KERNEL);
if (!priv) {
pr_err("SPIDERPCI-IOWA:"
"Can't allocate struct spiderpci_iowa_private");
return -1;
}
if (of_address_to_resource(np, 0, &r)) {
pr_err("SPIDERPCI-IOWA:Can't get resource.\n");
goto error;
}
regs = ioremap(r.start + offset, SPIDER_PCI_REG_SIZE);
if (!regs) {
pr_err("SPIDERPCI-IOWA:ioremap failed.\n");
goto error;
}
priv->regs = regs;
bus->private = priv;
if (spiderpci_pci_setup_chip(bus->phb, regs))
goto error;
return 0;
error:
kfree(priv);
bus->private = NULL;
if (regs)
iounmap(regs);
return -1;
}
struct ppc_pci_io spiderpci_ops = {
.readb = spiderpci_readb,
.readw = spiderpci_readw,
.readl = spiderpci_readl,
.readq = spiderpci_readq,
.readw_be = spiderpci_readw_be,
.readl_be = spiderpci_readl_be,
.readq_be = spiderpci_readq_be,
.readsb = spiderpci_readsb,
.readsw = spiderpci_readsw,
.readsl = spiderpci_readsl,
.memcpy_fromio = spiderpci_memcpy_fromio,
};

@@ -1,344 +0,0 @@
// SPDX-License-Identifier: GPL-2.0-or-later
/*
* External Interrupt Controller on Spider South Bridge
*
* (C) Copyright IBM Deutschland Entwicklung GmbH 2005
*
* Author: Arnd Bergmann <arndb@de.ibm.com>
*/
#include <linux/interrupt.h>
#include <linux/irq.h>
#include <linux/ioport.h>
#include <linux/of_address.h>
#include <linux/of_irq.h>
#include <linux/pgtable.h>
#include <asm/io.h>
#include "interrupt.h"
/* register layout taken from Spider spec, table 7.4-4 */
enum {
TIR_DEN = 0x004, /* Detection Enable Register */
TIR_MSK = 0x084, /* Mask Level Register */
TIR_EDC = 0x0c0, /* Edge Detection Clear Register */
TIR_PNDA = 0x100, /* Pending Register A */
TIR_PNDB = 0x104, /* Pending Register B */
TIR_CS = 0x144, /* Current Status Register */
TIR_LCSA = 0x150, /* Level Current Status Register A */
TIR_LCSB = 0x154, /* Level Current Status Register B */
TIR_LCSC = 0x158, /* Level Current Status Register C */
TIR_LCSD = 0x15c, /* Level Current Status Register D */
TIR_CFGA = 0x200, /* Setting Register A0 */
TIR_CFGB = 0x204, /* Setting Register B0 */
/* 0x208 ... 0x3ff Setting Register An/Bn */
TIR_PPNDA = 0x400, /* Packet Pending Register A */
TIR_PPNDB = 0x404, /* Packet Pending Register B */
TIR_PIERA = 0x408, /* Packet Output Error Register A */
TIR_PIERB = 0x40c, /* Packet Output Error Register B */
TIR_PIEN = 0x444, /* Packet Output Enable Register */
TIR_PIPND = 0x454, /* Packet Output Pending Register */
TIRDID = 0x484, /* Spider Device ID Register */
REISTIM = 0x500, /* Reissue Command Timeout Time Setting */
REISTIMEN = 0x504, /* Reissue Command Timeout Setting */
REISWAITEN = 0x508, /* Reissue Wait Control*/
};
#define SPIDER_CHIP_COUNT 4
#define SPIDER_SRC_COUNT 64
#define SPIDER_IRQ_INVALID 63
struct spider_pic {
struct irq_domain *host;
void __iomem *regs;
unsigned int node_id;
};
static struct spider_pic spider_pics[SPIDER_CHIP_COUNT];
static struct spider_pic *spider_irq_data_to_pic(struct irq_data *d)
{
return irq_data_get_irq_chip_data(d);
}
static void __iomem *spider_get_irq_config(struct spider_pic *pic,
unsigned int src)
{
return pic->regs + TIR_CFGA + 8 * src;
}
static void spider_unmask_irq(struct irq_data *d)
{
struct spider_pic *pic = spider_irq_data_to_pic(d);
void __iomem *cfg = spider_get_irq_config(pic, irqd_to_hwirq(d));
out_be32(cfg, in_be32(cfg) | 0x30000000u);
}
static void spider_mask_irq(struct irq_data *d)
{
struct spider_pic *pic = spider_irq_data_to_pic(d);
void __iomem *cfg = spider_get_irq_config(pic, irqd_to_hwirq(d));
out_be32(cfg, in_be32(cfg) & ~0x30000000u);
}
static void spider_ack_irq(struct irq_data *d)
{
struct spider_pic *pic = spider_irq_data_to_pic(d);
unsigned int src = irqd_to_hwirq(d);
/* Reset edge detection logic if necessary
*/
if (irqd_is_level_type(d))
return;
/* Only interrupts 47 to 50 can be set to edge */
if (src < 47 || src > 50)
return;
/* Perform the clear of the edge logic */
out_be32(pic->regs + TIR_EDC, 0x100 | (src & 0xf));
}
static int spider_set_irq_type(struct irq_data *d, unsigned int type)
{
unsigned int sense = type & IRQ_TYPE_SENSE_MASK;
struct spider_pic *pic = spider_irq_data_to_pic(d);
unsigned int hw = irqd_to_hwirq(d);
void __iomem *cfg = spider_get_irq_config(pic, hw);
u32 old_mask;
u32 ic;
/* Note that only level high is supported for most interrupts */
if (sense != IRQ_TYPE_NONE && sense != IRQ_TYPE_LEVEL_HIGH &&
(hw < 47 || hw > 50))
return -EINVAL;
/* Decode sense type */
switch(sense) {
case IRQ_TYPE_EDGE_RISING:
ic = 0x3;
break;
case IRQ_TYPE_EDGE_FALLING:
ic = 0x2;
break;
case IRQ_TYPE_LEVEL_LOW:
ic = 0x0;
break;
case IRQ_TYPE_LEVEL_HIGH:
case IRQ_TYPE_NONE:
ic = 0x1;
break;
default:
return -EINVAL;
}
/* Configure the source. One gross hack that was there before and
* that I've kept around is the priority to the BE, which I set to
* be the same as the interrupt source number. I don't know whether
* that's supposed to make any kind of sense; we'll have to decide
* that eventually, but for now I'm not changing the behaviour.
*/
old_mask = in_be32(cfg) & 0x30000000u;
out_be32(cfg, old_mask | (ic << 24) | (0x7 << 16) |
(pic->node_id << 4) | 0xe);
out_be32(cfg + 4, (0x2 << 16) | (hw & 0xff));
return 0;
}
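/* Annotation inferred from the writes in spider_set_irq_type() above,
* not from the Spider spec: the low TIR_CFGA word appears to pack
* bits 29:28 = mask/unmask state (preserved via old_mask),
* bits 25:24 = input control "ic" (sense and polarity),
* bits 18:16 = priority (hardcoded to 0x7),
* bits 7:4 = destination node id, and
* bits 3:0 = 0xe, whose purpose this file does not explain.
* The second word, cfg + 4, carries the hardware source number in its
* low byte.
*/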
static struct irq_chip spider_pic = {
.name = "SPIDER",
.irq_unmask = spider_unmask_irq,
.irq_mask = spider_mask_irq,
.irq_ack = spider_ack_irq,
.irq_set_type = spider_set_irq_type,
};
static int spider_host_map(struct irq_domain *h, unsigned int virq,
irq_hw_number_t hw)
{
irq_set_chip_data(virq, h->host_data);
irq_set_chip_and_handler(virq, &spider_pic, handle_level_irq);
/* Set default irq type */
irq_set_irq_type(virq, IRQ_TYPE_NONE);
return 0;
}
static int spider_host_xlate(struct irq_domain *h, struct device_node *ct,
const u32 *intspec, unsigned int intsize,
irq_hw_number_t *out_hwirq, unsigned int *out_flags)
{
/* Spider interrupts have 2 cells, first is the interrupt source,
* second, well, I don't know for sure yet ... We mask the top bits
* because old device-trees encode a node number in there
*/
*out_hwirq = intspec[0] & 0x3f;
*out_flags = IRQ_TYPE_LEVEL_HIGH;
return 0;
}
static const struct irq_domain_ops spider_host_ops = {
.map = spider_host_map,
.xlate = spider_host_xlate,
};
static void spider_irq_cascade(struct irq_desc *desc)
{
struct irq_chip *chip = irq_desc_get_chip(desc);
struct spider_pic *pic = irq_desc_get_handler_data(desc);
unsigned int cs;
cs = in_be32(pic->regs + TIR_CS) >> 24;
if (cs != SPIDER_IRQ_INVALID)
generic_handle_domain_irq(pic->host, cs);
chip->irq_eoi(&desc->irq_data);
}
/* For hooking up the cascade we have a problem. Our device-tree is
* crap and we don't know which BE iic interrupt we are hooked on, at
* least not the "standard" way. We can reconstitute it from two
* pieces of information though: which BE node we are connected to and whether
* we are connected to IOIF0 or IOIF1. Right now, we really only care
* about the IBM cell blade and we know that its firmware gives us an
* interrupt-map property which is pretty strange.
*/
static unsigned int __init spider_find_cascade_and_node(struct spider_pic *pic)
{
unsigned int virq;
const u32 *imap, *tmp;
int imaplen, intsize, unit;
struct device_node *iic;
struct device_node *of_node;
of_node = irq_domain_get_of_node(pic->host);
/* First, we check whether we have a real "interrupts" in the device
* tree in case the device-tree is ever fixed
*/
virq = irq_of_parse_and_map(of_node, 0);
if (virq)
return virq;
/* Now do the horrible hacks */
tmp = of_get_property(of_node, "#interrupt-cells", NULL);
if (tmp == NULL)
return 0;
intsize = *tmp;
imap = of_get_property(of_node, "interrupt-map", &imaplen);
if (imap == NULL || imaplen < (intsize + 1))
return 0;
iic = of_find_node_by_phandle(imap[intsize]);
if (iic == NULL)
return 0;
imap += intsize + 1;
tmp = of_get_property(iic, "#interrupt-cells", NULL);
if (tmp == NULL) {
of_node_put(iic);
return 0;
}
intsize = *tmp;
/* Assume unit is last entry of interrupt specifier */
unit = imap[intsize - 1];
/* Ok, we have a unit, now let's try to get the node */
tmp = of_get_property(iic, "ibm,interrupt-server-ranges", NULL);
if (tmp == NULL) {
of_node_put(iic);
return 0;
}
/* ugly as hell but works for now */
pic->node_id = (*tmp) >> 1;
of_node_put(iic);
/* Ok, now let's get cracking. You may ask me why I just didn't match
* the iic host from the iic OF node, but that way I'm still compatible
* with really, really old firmwares for which we don't have a node
*/
/* Manufacture an IIC interrupt number of class 2 */
virq = irq_create_mapping(NULL,
(pic->node_id << IIC_IRQ_NODE_SHIFT) |
(2 << IIC_IRQ_CLASS_SHIFT) |
unit);
if (!virq)
printk(KERN_ERR "spider_pic: failed to map cascade !");
return virq;
}
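/* Hypothetical worked example (the shift values live in interrupt.h,
* which is not shown in this hunk; assume IIC_IRQ_NODE_SHIFT == 8 and
* IIC_IRQ_CLASS_SHIFT == 4): for node_id 1 and unit 0xe the manufactured
* class-2 IIC number would be (1 << 8) | (2 << 4) | 0xe == 0x12e.
*/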
static void __init spider_init_one(struct device_node *of_node, int chip,
unsigned long addr)
{
struct spider_pic *pic = &spider_pics[chip];
int i, virq;
/* Map registers */
pic->regs = ioremap(addr, 0x1000);
if (pic->regs == NULL)
panic("spider_pic: can't map registers !");
/* Allocate a host */
pic->host = irq_domain_add_linear(of_node, SPIDER_SRC_COUNT,
&spider_host_ops, pic);
if (pic->host == NULL)
panic("spider_pic: can't allocate irq host !");
/* Go through all sources and disable them */
for (i = 0; i < SPIDER_SRC_COUNT; i++) {
void __iomem *cfg = pic->regs + TIR_CFGA + 8 * i;
out_be32(cfg, in_be32(cfg) & ~0x30000000u);
}
/* do not mask any interrupts because of level */
out_be32(pic->regs + TIR_MSK, 0x0);
/* enable interrupt packets to be output */
out_be32(pic->regs + TIR_PIEN, in_be32(pic->regs + TIR_PIEN) | 0x1);
/* Hook up the cascade interrupt to the iic and nodeid */
virq = spider_find_cascade_and_node(pic);
if (!virq)
return;
irq_set_handler_data(virq, pic);
irq_set_chained_handler(virq, spider_irq_cascade);
printk(KERN_INFO "spider_pic: node %d, addr: 0x%lx %pOF\n",
pic->node_id, addr, of_node);
/* Enable the interrupt detection enable bit. Do this last! */
out_be32(pic->regs + TIR_DEN, in_be32(pic->regs + TIR_DEN) | 0x1);
}
void __init spider_init_IRQ(void)
{
struct resource r;
struct device_node *dn;
int chip = 0;
/* XXX node numbers are totally bogus. We _hope_ we get the device
* nodes in the right order here but that's definitely not guaranteed,
* we need to get the node from the device tree instead.
* There is currently no proper property for it (but our whole
* device-tree is bogus anyway) so all we can do is pray or maybe test
* the address and deduce the node-id
*/
for_each_node_by_name(dn, "interrupt-controller") {
if (of_device_is_compatible(dn, "CBEA,platform-spider-pic")) {
if (of_address_to_resource(dn, 0, &r)) {
printk(KERN_WARNING "spider-pic: Failed\n");
continue;
}
} else if (of_device_is_compatible(dn, "sti,platform-spider-pic")
&& (chip < 2)) {
static long hard_coded_pics[] =
{ 0x24000008000ul, 0x34000008000ul};
r.start = hard_coded_pics[chip];
} else
continue;
spider_init_one(dn, chip++, r.start);
}
}

@@ -23,7 +23,6 @@
#include <asm/spu.h>
#include <asm/spu_priv1.h>
#include <asm/spu_csa.h>
#include <asm/xmon.h>
#include <asm/kexec.h>
const struct spu_management_ops *spu_management_ops;
@@ -772,7 +771,6 @@ static int __init init_spu_base(void)
fb_append_extra_logo(&logo_spe_clut224, ret);
mutex_lock(&spu_full_list_mutex);
xmon_register_spus(&spu_full_list);
crash_register_spus(&spu_full_list);
mutex_unlock(&spu_full_list_mutex);
spu_add_dev_attr(&dev_attr_stat);

@@ -1,530 +0,0 @@
// SPDX-License-Identifier: GPL-2.0-only
/*
* spu management operations for of based platforms
*
* (C) Copyright IBM Deutschland Entwicklung GmbH 2005
* Copyright 2006 Sony Corp.
* (C) Copyright 2007 TOSHIBA CORPORATION
*/
#include <linux/interrupt.h>
#include <linux/list.h>
#include <linux/export.h>
#include <linux/ptrace.h>
#include <linux/wait.h>
#include <linux/mm.h>
#include <linux/io.h>
#include <linux/mutex.h>
#include <linux/device.h>
#include <linux/of_address.h>
#include <linux/of_irq.h>
#include <asm/spu.h>
#include <asm/spu_priv1.h>
#include <asm/firmware.h>
#include "spufs/spufs.h"
#include "interrupt.h"
#include "spu_priv1_mmio.h"
struct device_node *spu_devnode(struct spu *spu)
{
return spu->devnode;
}
EXPORT_SYMBOL_GPL(spu_devnode);
static u64 __init find_spu_unit_number(struct device_node *spe)
{
const unsigned int *prop;
int proplen;
/* new device trees should provide the physical-id attribute */
prop = of_get_property(spe, "physical-id", &proplen);
if (proplen == 4)
return (u64)*prop;
/* celleb device tree provides the unit-id */
prop = of_get_property(spe, "unit-id", &proplen);
if (proplen == 4)
return (u64)*prop;
/* legacy device trees provide the id in the reg attribute */
prop = of_get_property(spe, "reg", &proplen);
if (proplen == 4)
return (u64)*prop;
return 0;
}
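/* Annotation: the three fallbacks above correspond to three device-tree
* vintages. Hypothetical nodes, for illustration only:
*
* spe@0 { physical-id = <0>; }; // new trees
* spe@0 { unit-id = <0>; }; // celleb trees
* spe@0 { reg = <0>; }; // legacy trees
*
* In each case a single 4-byte cell is read and widened to u64.
*/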
static void spu_unmap(struct spu *spu)
{
if (!firmware_has_feature(FW_FEATURE_LPAR))
iounmap(spu->priv1);
iounmap(spu->priv2);
iounmap(spu->problem);
iounmap((__force u8 __iomem *)spu->local_store);
}
static int __init spu_map_interrupts_old(struct spu *spu,
struct device_node *np)
{
unsigned int isrc;
const u32 *tmp;
int nid;
/* Get the interrupt source unit from the device-tree */
tmp = of_get_property(np, "isrc", NULL);
if (!tmp)
return -ENODEV;
isrc = tmp[0];
tmp = of_get_property(np->parent->parent, "node-id", NULL);
if (!tmp) {
printk(KERN_WARNING "%s: can't find node-id\n", __func__);
nid = spu->node;
} else
nid = tmp[0];
/* Add the node number */
isrc |= nid << IIC_IRQ_NODE_SHIFT;
/* Now map interrupts of all 3 classes */
spu->irqs[0] = irq_create_mapping(NULL, IIC_IRQ_CLASS_0 | isrc);
spu->irqs[1] = irq_create_mapping(NULL, IIC_IRQ_CLASS_1 | isrc);
spu->irqs[2] = irq_create_mapping(NULL, IIC_IRQ_CLASS_2 | isrc);
/* Right now, we only fail if class 2 failed */
if (!spu->irqs[2])
return -EINVAL;
return 0;
}
static void __iomem * __init spu_map_prop_old(struct spu *spu,
struct device_node *n,
const char *name)
{
const struct address_prop {
unsigned long address;
unsigned int len;
} __attribute__((packed)) *prop;
int proplen;
prop = of_get_property(n, name, &proplen);
if (prop == NULL || proplen != sizeof(struct address_prop))
return NULL;
return ioremap(prop->address, prop->len);
}
static int __init spu_map_device_old(struct spu *spu)
{
struct device_node *node = spu->devnode;
const char *prop;
int ret;
ret = -ENODEV;
spu->name = of_get_property(node, "name", NULL);
if (!spu->name)
goto out;
prop = of_get_property(node, "local-store", NULL);
if (!prop)
goto out;
spu->local_store_phys = *(unsigned long *)prop;
/* we use local store as ram, not io memory */
spu->local_store = (void __force *)
spu_map_prop_old(spu, node, "local-store");
if (!spu->local_store)
goto out;
prop = of_get_property(node, "problem", NULL);
if (!prop)
goto out_unmap;
spu->problem_phys = *(unsigned long *)prop;
spu->problem = spu_map_prop_old(spu, node, "problem");
if (!spu->problem)
goto out_unmap;
spu->priv2 = spu_map_prop_old(spu, node, "priv2");
if (!spu->priv2)
goto out_unmap;
if (!firmware_has_feature(FW_FEATURE_LPAR)) {
spu->priv1 = spu_map_prop_old(spu, node, "priv1");
if (!spu->priv1)
goto out_unmap;
}
ret = 0;
goto out;
out_unmap:
spu_unmap(spu);
out:
return ret;
}
static int __init spu_map_interrupts(struct spu *spu, struct device_node *np)
{
int i;
for (i = 0; i < 3; i++) {
spu->irqs[i] = irq_of_parse_and_map(np, i);
if (!spu->irqs[i])
goto err;
}
return 0;
err:
pr_debug("failed to map irq %x for spu %s\n", i, spu->name);
for (; i >= 0; i--) {
if (spu->irqs[i])
irq_dispose_mapping(spu->irqs[i]);
}
return -EINVAL;
}
static int __init spu_map_resource(struct spu *spu, int nr,
void __iomem** virt, unsigned long *phys)
{
struct device_node *np = spu->devnode;
struct resource resource = { };
unsigned long len;
int ret;
ret = of_address_to_resource(np, nr, &resource);
if (ret)
return ret;
if (phys)
*phys = resource.start;
len = resource_size(&resource);
*virt = ioremap(resource.start, len);
if (!*virt)
return -EINVAL;
return 0;
}
static int __init spu_map_device(struct spu *spu)
{
struct device_node *np = spu->devnode;
int ret = -ENODEV;
spu->name = of_get_property(np, "name", NULL);
if (!spu->name)
goto out;
ret = spu_map_resource(spu, 0, (void __iomem**)&spu->local_store,
&spu->local_store_phys);
if (ret) {
pr_debug("spu_new: failed to map %pOF resource 0\n",
np);
goto out;
}
ret = spu_map_resource(spu, 1, (void __iomem**)&spu->problem,
&spu->problem_phys);
if (ret) {
pr_debug("spu_new: failed to map %pOF resource 1\n",
np);
goto out_unmap;
}
ret = spu_map_resource(spu, 2, (void __iomem**)&spu->priv2, NULL);
if (ret) {
pr_debug("spu_new: failed to map %pOF resource 2\n",
np);
goto out_unmap;
}
if (!firmware_has_feature(FW_FEATURE_LPAR))
ret = spu_map_resource(spu, 3,
(void __iomem**)&spu->priv1, NULL);
if (ret) {
pr_debug("spu_new: failed to map %pOF resource 3\n",
np);
goto out_unmap;
}
pr_debug("spu_new: %pOF maps:\n", np);
pr_debug(" local store : 0x%016lx -> 0x%p\n",
spu->local_store_phys, spu->local_store);
pr_debug(" problem state : 0x%016lx -> 0x%p\n",
spu->problem_phys, spu->problem);
pr_debug(" priv2 : 0x%p\n", spu->priv2);
pr_debug(" priv1 : 0x%p\n", spu->priv1);
return 0;
out_unmap:
spu_unmap(spu);
out:
pr_debug("failed to map spe %s: %d\n", spu->name, ret);
return ret;
}
static int __init of_enumerate_spus(int (*fn)(void *data))
{
int ret;
struct device_node *node;
unsigned int n = 0;
ret = -ENODEV;
for_each_node_by_type(node, "spe") {
ret = fn(node);
if (ret) {
printk(KERN_WARNING "%s: Error initializing %pOFn\n",
__func__, node);
of_node_put(node);
break;
}
n++;
}
return ret ? ret : n;
}
static int __init of_create_spu(struct spu *spu, void *data)
{
int ret;
struct device_node *spe = (struct device_node *)data;
static int legacy_map = 0, legacy_irq = 0;
spu->devnode = of_node_get(spe);
spu->spe_id = find_spu_unit_number(spe);
spu->node = of_node_to_nid(spe);
if (spu->node >= MAX_NUMNODES) {
printk(KERN_WARNING "SPE %pOF on node %d ignored,"
" node number too big\n", spe, spu->node);
printk(KERN_WARNING "Check if CONFIG_NUMA is enabled.\n");
ret = -ENODEV;
goto out;
}
ret = spu_map_device(spu);
if (ret) {
if (!legacy_map) {
legacy_map = 1;
printk(KERN_WARNING "%s: Legacy device tree found, "
"trying to map old style\n", __func__);
}
ret = spu_map_device_old(spu);
if (ret) {
printk(KERN_ERR "Unable to map %s\n",
spu->name);
goto out;
}
}
ret = spu_map_interrupts(spu, spe);
if (ret) {
if (!legacy_irq) {
legacy_irq = 1;
printk(KERN_WARNING "%s: Legacy device tree found, "
"trying old style irq\n", __func__);
}
ret = spu_map_interrupts_old(spu, spe);
if (ret) {
printk(KERN_ERR "%s: could not map interrupts\n",
spu->name);
goto out_unmap;
}
}
pr_debug("Using SPE %s %p %p %p %p %d\n", spu->name,
spu->local_store, spu->problem, spu->priv1,
spu->priv2, spu->number);
goto out;
out_unmap:
spu_unmap(spu);
out:
return ret;
}
static int of_destroy_spu(struct spu *spu)
{
spu_unmap(spu);
of_node_put(spu->devnode);
return 0;
}
static void enable_spu_by_master_run(struct spu_context *ctx)
{
ctx->ops->master_start(ctx);
}
static void disable_spu_by_master_run(struct spu_context *ctx)
{
ctx->ops->master_stop(ctx);
}
/* Hardcoded affinity idxs for qs20 */
#define QS20_SPES_PER_BE 8
static int qs20_reg_idxs[QS20_SPES_PER_BE] = { 0, 2, 4, 6, 7, 5, 3, 1 };
static int qs20_reg_memory[QS20_SPES_PER_BE] = { 1, 1, 0, 0, 0, 0, 0, 0 };
static struct spu *__init spu_lookup_reg(int node, u32 reg)
{
struct spu *spu;
const u32 *spu_reg;
list_for_each_entry(spu, &cbe_spu_info[node].spus, cbe_list) {
spu_reg = of_get_property(spu_devnode(spu), "reg", NULL);
if (*spu_reg == reg)
return spu;
}
return NULL;
}
static void __init init_affinity_qs20_harcoded(void)
{
int node, i;
struct spu *last_spu, *spu;
u32 reg;
for (node = 0; node < MAX_NUMNODES; node++) {
last_spu = NULL;
for (i = 0; i < QS20_SPES_PER_BE; i++) {
reg = qs20_reg_idxs[i];
spu = spu_lookup_reg(node, reg);
if (!spu)
continue;
spu->has_mem_affinity = qs20_reg_memory[reg];
if (last_spu)
list_add_tail(&spu->aff_list,
&last_spu->aff_list);
last_spu = spu;
}
}
}
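/* Annotation inferred from the loop above, not from QS20 documentation:
* following qs20_reg_idxs[], the affinity list chains the SPEs in
* physical order 0 -> 2 -> 4 -> 6 -> 7 -> 5 -> 3 -> 1, and
* qs20_reg_memory[] flags the two chain ends (reg 0 and reg 1) as the
* SPEs with memory affinity.
*/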
static int __init of_has_vicinity(void)
{
struct device_node *dn;
for_each_node_by_type(dn, "spe") {
if (of_property_present(dn, "vicinity")) {
of_node_put(dn);
return 1;
}
}
return 0;
}
static struct spu *__init devnode_spu(int cbe, struct device_node *dn)
{
struct spu *spu;
list_for_each_entry(spu, &cbe_spu_info[cbe].spus, cbe_list)
if (spu_devnode(spu) == dn)
return spu;
return NULL;
}
static struct spu * __init
neighbour_spu(int cbe, struct device_node *target, struct device_node *avoid)
{
struct spu *spu;
struct device_node *spu_dn;
const phandle *vic_handles;
int lenp, i;
list_for_each_entry(spu, &cbe_spu_info[cbe].spus, cbe_list) {
spu_dn = spu_devnode(spu);
if (spu_dn == avoid)
continue;
vic_handles = of_get_property(spu_dn, "vicinity", &lenp);
for (i = 0; i < (lenp / sizeof(phandle)); i++) {
if (vic_handles[i] == target->phandle)
return spu;
}
}
return NULL;
}
static void __init init_affinity_node(int cbe)
{
struct spu *spu, *last_spu;
struct device_node *vic_dn, *last_spu_dn;
phandle avoid_ph;
const phandle *vic_handles;
int lenp, i, added;
last_spu = list_first_entry(&cbe_spu_info[cbe].spus, struct spu,
cbe_list);
avoid_ph = 0;
for (added = 1; added < cbe_spu_info[cbe].n_spus; added++) {
last_spu_dn = spu_devnode(last_spu);
vic_handles = of_get_property(last_spu_dn, "vicinity", &lenp);
/*
* Walk through each phandle in vicinity property of the spu
* (typically two vicinity phandles per spe node)
*/
for (i = 0; i < (lenp / sizeof(phandle)); i++) {
if (vic_handles[i] == avoid_ph)
continue;
vic_dn = of_find_node_by_phandle(vic_handles[i]);
if (!vic_dn)
continue;
if (of_node_name_eq(vic_dn, "spe")) {
spu = devnode_spu(cbe, vic_dn);
avoid_ph = last_spu_dn->phandle;
} else {
/*
* "mic-tm" and "bif0" nodes do not have
* vicinity property. So we need to find the
* spe which has vic_dn as neighbour, but
* skipping the one we came from (last_spu_dn)
*/
spu = neighbour_spu(cbe, vic_dn, last_spu_dn);
if (!spu)
continue;
if (of_node_name_eq(vic_dn, "mic-tm")) {
last_spu->has_mem_affinity = 1;
spu->has_mem_affinity = 1;
}
avoid_ph = vic_dn->phandle;
}
of_node_put(vic_dn);
list_add_tail(&spu->aff_list, &last_spu->aff_list);
last_spu = spu;
break;
}
}
}
static void __init init_affinity_fw(void)
{
int cbe;
for (cbe = 0; cbe < MAX_NUMNODES; cbe++)
init_affinity_node(cbe);
}
static int __init init_affinity(void)
{
if (of_has_vicinity()) {
init_affinity_fw();
} else {
if (of_machine_is_compatible("IBM,CPBW-1.0"))
init_affinity_qs20_harcoded();
else
printk("No affinity configuration found\n");
}
return 0;
}
const struct spu_management_ops spu_management_of_ops = {
.enumerate_spus = of_enumerate_spus,
.create_spu = of_create_spu,
.destroy_spu = of_destroy_spu,
.enable_spu = enable_spu_by_master_run,
.disable_spu = disable_spu_by_master_run,
.init_affinity = init_affinity,
};

@@ -1,167 +0,0 @@
// SPDX-License-Identifier: GPL-2.0-only
/*
* spu hypervisor abstraction for direct hardware access.
*
* (C) Copyright IBM Deutschland Entwicklung GmbH 2005
* Copyright 2006 Sony Corp.
*/
#include <linux/interrupt.h>
#include <linux/list.h>
#include <linux/ptrace.h>
#include <linux/wait.h>
#include <linux/mm.h>
#include <linux/io.h>
#include <linux/mutex.h>
#include <linux/device.h>
#include <linux/sched.h>
#include <asm/spu.h>
#include <asm/spu_priv1.h>
#include <asm/firmware.h>
#include "interrupt.h"
#include "spu_priv1_mmio.h"
static void int_mask_and(struct spu *spu, int class, u64 mask)
{
u64 old_mask;
old_mask = in_be64(&spu->priv1->int_mask_RW[class]);
out_be64(&spu->priv1->int_mask_RW[class], old_mask & mask);
}
static void int_mask_or(struct spu *spu, int class, u64 mask)
{
u64 old_mask;
old_mask = in_be64(&spu->priv1->int_mask_RW[class]);
out_be64(&spu->priv1->int_mask_RW[class], old_mask | mask);
}
static void int_mask_set(struct spu *spu, int class, u64 mask)
{
out_be64(&spu->priv1->int_mask_RW[class], mask);
}
static u64 int_mask_get(struct spu *spu, int class)
{
return in_be64(&spu->priv1->int_mask_RW[class]);
}
static void int_stat_clear(struct spu *spu, int class, u64 stat)
{
out_be64(&spu->priv1->int_stat_RW[class], stat);
}
static u64 int_stat_get(struct spu *spu, int class)
{
return in_be64(&spu->priv1->int_stat_RW[class]);
}
static void cpu_affinity_set(struct spu *spu, int cpu)
{
u64 target;
u64 route;
if (nr_cpus_node(spu->node)) {
const struct cpumask *spumask = cpumask_of_node(spu->node),
*cpumask = cpumask_of_node(cpu_to_node(cpu));
if (!cpumask_intersects(spumask, cpumask))
return;
}
target = iic_get_target_id(cpu);
route = target << 48 | target << 32 | target << 16;
out_be64(&spu->priv1->int_route_RW, route);
}
static u64 mfc_dar_get(struct spu *spu)
{
return in_be64(&spu->priv1->mfc_dar_RW);
}
static u64 mfc_dsisr_get(struct spu *spu)
{
return in_be64(&spu->priv1->mfc_dsisr_RW);
}
static void mfc_dsisr_set(struct spu *spu, u64 dsisr)
{
out_be64(&spu->priv1->mfc_dsisr_RW, dsisr);
}
static void mfc_sdr_setup(struct spu *spu)
{
out_be64(&spu->priv1->mfc_sdr_RW, mfspr(SPRN_SDR1));
}
static void mfc_sr1_set(struct spu *spu, u64 sr1)
{
out_be64(&spu->priv1->mfc_sr1_RW, sr1);
}
static u64 mfc_sr1_get(struct spu *spu)
{
return in_be64(&spu->priv1->mfc_sr1_RW);
}
static void mfc_tclass_id_set(struct spu *spu, u64 tclass_id)
{
out_be64(&spu->priv1->mfc_tclass_id_RW, tclass_id);
}
static u64 mfc_tclass_id_get(struct spu *spu)
{
return in_be64(&spu->priv1->mfc_tclass_id_RW);
}
static void tlb_invalidate(struct spu *spu)
{
out_be64(&spu->priv1->tlb_invalidate_entry_W, 0ul);
}
static void resource_allocation_groupID_set(struct spu *spu, u64 id)
{
out_be64(&spu->priv1->resource_allocation_groupID_RW, id);
}
static u64 resource_allocation_groupID_get(struct spu *spu)
{
return in_be64(&spu->priv1->resource_allocation_groupID_RW);
}
static void resource_allocation_enable_set(struct spu *spu, u64 enable)
{
out_be64(&spu->priv1->resource_allocation_enable_RW, enable);
}
static u64 resource_allocation_enable_get(struct spu *spu)
{
return in_be64(&spu->priv1->resource_allocation_enable_RW);
}
const struct spu_priv1_ops spu_priv1_mmio_ops =
{
.int_mask_and = int_mask_and,
.int_mask_or = int_mask_or,
.int_mask_set = int_mask_set,
.int_mask_get = int_mask_get,
.int_stat_clear = int_stat_clear,
.int_stat_get = int_stat_get,
.cpu_affinity_set = cpu_affinity_set,
.mfc_dar_get = mfc_dar_get,
.mfc_dsisr_get = mfc_dsisr_get,
.mfc_dsisr_set = mfc_dsisr_set,
.mfc_sdr_setup = mfc_sdr_setup,
.mfc_sr1_set = mfc_sr1_set,
.mfc_sr1_get = mfc_sr1_get,
.mfc_tclass_id_set = mfc_tclass_id_set,
.mfc_tclass_id_get = mfc_tclass_id_get,
.tlb_invalidate = tlb_invalidate,
.resource_allocation_groupID_set = resource_allocation_groupID_set,
.resource_allocation_groupID_get = resource_allocation_groupID_get,
.resource_allocation_enable_set = resource_allocation_enable_set,
.resource_allocation_enable_get = resource_allocation_enable_get,
};
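/* Annotation: callers do not use this table directly; they go through
* spu_priv1_ops, which cell_setup_arch() points at spu_priv1_mmio_ops
* on bare metal. A minimal sketch of the wrapper style (the real inline
* helpers live in asm/spu_priv1.h and may differ in detail):
*
* static inline void spu_int_mask_set(struct spu *spu, int class, u64 mask)
* {
* spu_priv1_ops->int_mask_set(spu, class, mask);
* }
*
* so the same SPU code can run under a hypervisor with a different ops
* table substituted.
*/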

@@ -1,14 +0,0 @@
/* SPDX-License-Identifier: GPL-2.0-only */
/*
* spu hypervisor abstraction for direct hardware access.
*
* Copyright (C) 2006 Sony Computer Entertainment Inc.
* Copyright 2006 Sony Corp.
*/
#ifndef SPU_PRIV1_MMIO_H
#define SPU_PRIV1_MMIO_H
struct device_node *spu_devnode(struct spu *spu);
#endif /* SPU_PRIV1_MMIO_H */

@@ -1,11 +1,12 @@
# SPDX-License-Identifier: GPL-2.0
config PPC_MICROWATT
depends on PPC_BOOK3S_64 && !SMP
depends on PPC_BOOK3S_64
bool "Microwatt SoC platform"
select PPC_XICS
select PPC_ICS_NATIVE
select PPC_ICP_NATIVE
select PPC_UDBG_16550
select COMMON_CLK
help
This option enables support for FPGA-based Microwatt implementations.

@@ -1 +1,2 @@
obj-y += setup.o rng.o
obj-$(CONFIG_SMP) += smp.o

@@ -3,5 +3,6 @@
#define _MICROWATT_H
void microwatt_rng_init(void);
void microwatt_init_smp(void);
#endif /* _MICROWATT_H */

@@ -29,15 +29,33 @@ static int __init microwatt_populate(void)
}
machine_arch_initcall(microwatt, microwatt_populate);
static int __init microwatt_probe(void)
{
/* Main reason for having this is to start the other CPU(s) */
if (IS_ENABLED(CONFIG_SMP))
microwatt_init_smp();
return 1;
}
static void __init microwatt_setup_arch(void)
{
microwatt_rng_init();
}
static void microwatt_idle(void)
{
if (!prep_irq_for_idle_irqsoff())
return;
__asm__ __volatile__ ("wait");
}
define_machine(microwatt) {
.name = "microwatt",
.compatible = "microwatt-soc",
.probe = microwatt_probe,
.init_IRQ = microwatt_init_IRQ,
.setup_arch = microwatt_setup_arch,
.progress = udbg_progress,
.power_save = microwatt_idle,
};

@@ -0,0 +1,80 @@
// SPDX-License-Identifier: GPL-2.0-or-later
/*
* SMP support functions for Microwatt
* Copyright 2025 Paul Mackerras <paulus@ozlabs.org>
*/
#include <linux/kernel.h>
#include <linux/smp.h>
#include <linux/io.h>
#include <asm/early_ioremap.h>
#include <asm/ppc-opcode.h>
#include <asm/reg.h>
#include <asm/smp.h>
#include <asm/xics.h>
#include "microwatt.h"
static void __init microwatt_smp_probe(void)
{
xics_smp_probe();
}
static void microwatt_smp_setup_cpu(int cpu)
{
if (cpu != 0)
xics_setup_cpu();
}
static struct smp_ops_t microwatt_smp_ops = {
.probe = microwatt_smp_probe,
.message_pass = NULL, /* Use smp_muxed_ipi_message_pass */
.kick_cpu = smp_generic_kick_cpu,
.setup_cpu = microwatt_smp_setup_cpu,
};
/* XXX get from device tree */
#define SYSCON_BASE 0xc0000000
#define SYSCON_LENGTH 0x100
#define SYSCON_CPU_CTRL 0x58
void __init microwatt_init_smp(void)
{
volatile unsigned char __iomem *syscon;
int ncpus;
int timeout;
syscon = early_ioremap(SYSCON_BASE, SYSCON_LENGTH);
if (syscon == NULL) {
pr_err("Failed to map SYSCON\n");
return;
}
ncpus = (readl(syscon + SYSCON_CPU_CTRL) >> 8) & 0xff;
if (ncpus < 2)
goto out;
smp_ops = &microwatt_smp_ops;
/*
* Write two instructions at location 0:
* mfspr r3, PIR
* b __secondary_hold
*/
*(unsigned int *)KERNELBASE = PPC_RAW_MFSPR(3, SPRN_PIR);
*(unsigned int *)(KERNELBASE+4) = PPC_RAW_BRANCH(&__secondary_hold - (char *)(KERNELBASE+4));
/* enable the other CPUs, they start at location 0 */
writel((1ul << ncpus) - 1, syscon + SYSCON_CPU_CTRL);
timeout = 10000;
while (!__secondary_hold_acknowledge) {
if (--timeout == 0)
break;
barrier();
}
out:
early_iounmap((void *)syscon, SYSCON_LENGTH);
}
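/* Annotation, not part of the diff: each secondary core comes out of
* reset at address 0, so the two patched instructions make it execute
*
* mfspr r3, PIR # load its own processor id into r3
* b __secondary_hold
*
* and park in __secondary_hold until the boot CPU releases it later in
* boot. The branch displacement is relative to the branch instruction
* itself, hence the "- (char *)(KERNELBASE + 4)" above.
*/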

@@ -17,6 +17,7 @@ config PPC_POWERNV
select MMU_NOTIFIER
select FORCE_SMP
select ARCH_SUPPORTS_PER_VMA_LOCK
select PPC_RADIX_BROADCAST_TLBIE
default y
config OPAL_PRD

@@ -23,6 +23,7 @@ config PPC_PSERIES
select FORCE_SMP
select SWIOTLB
select ARCH_SUPPORTS_PER_VMA_LOCK
select PPC_RADIX_BROADCAST_TLBIE
default y
config PARAVIRT
@@ -128,6 +129,15 @@ config CMM
will be reused for other LPARs. The interface allows firmware to
balance memory across many LPARs.
config HTMDUMP
tristate "PowerVM data dumper"
depends on PPC_PSERIES && DEBUG_FS
default m
help
Select this option if you want to enable the kernel debugfs
interface to dump the Hardware Trace Macro (HTM) function data
in the LPAR.
config HV_PERF_CTRS
bool "Hypervisor supplied PMU events (24x7 & GPCI)"
default y

@@ -19,6 +19,7 @@ obj-$(CONFIG_HVC_CONSOLE) += hvconsole.o
obj-$(CONFIG_HVCS) += hvcserver.o
obj-$(CONFIG_HCALL_STATS) += hvCall_inst.o
obj-$(CONFIG_CMM) += cmm.o
obj-$(CONFIG_HTMDUMP) += htmdump.o
obj-$(CONFIG_IO_EVENT_IRQ) += io_event_irq.o
obj-$(CONFIG_LPARCFG) += lparcfg.o
obj-$(CONFIG_IBMVIO) += vio.o

@@ -0,0 +1,121 @@
// SPDX-License-Identifier: GPL-2.0-or-later
/*
* Copyright (C) IBM Corporation, 2024
*/
#define pr_fmt(fmt) "htmdump: " fmt
#include <linux/debugfs.h>
#include <linux/module.h>
#include <asm/io.h>
#include <asm/machdep.h>
#include <asm/plpar_wrappers.h>
static void *htm_buf;
static u32 nodeindex;
static u32 nodalchipindex;
static u32 coreindexonchip;
static u32 htmtype;
static struct dentry *htmdump_debugfs_dir;
static ssize_t htmdump_read(struct file *filp, char __user *ubuf,
size_t count, loff_t *ppos)
{
void *htm_buf = filp->private_data;
unsigned long page, read_size, available;
loff_t offset;
long rc;
page = ALIGN_DOWN(*ppos, PAGE_SIZE);
offset = (*ppos) % PAGE_SIZE;
rc = htm_get_dump_hardware(nodeindex, nodalchipindex, coreindexonchip,
htmtype, virt_to_phys(htm_buf), PAGE_SIZE, page);
switch (rc) {
case H_SUCCESS:
/* H_PARTIAL for the case where all available data can't be
* returned due to buffer size constraint.
*/
case H_PARTIAL:
break;
/* H_NOT_AVAILABLE indicates reading from an offset outside the range,
* i.e. past end of file.
*/
case H_NOT_AVAILABLE:
return 0;
case H_BUSY:
case H_LONG_BUSY_ORDER_1_MSEC:
case H_LONG_BUSY_ORDER_10_MSEC:
case H_LONG_BUSY_ORDER_100_MSEC:
case H_LONG_BUSY_ORDER_1_SEC:
case H_LONG_BUSY_ORDER_10_SEC:
case H_LONG_BUSY_ORDER_100_SEC:
return -EBUSY;
case H_PARAMETER:
case H_P2:
case H_P3:
case H_P4:
case H_P5:
case H_P6:
return -EINVAL;
case H_STATE:
return -EIO;
case H_AUTHORITY:
return -EPERM;
}
available = PAGE_SIZE;
read_size = min(count, available);
*ppos += read_size;
return simple_read_from_buffer(ubuf, count, &offset, htm_buf, available);
}
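/* Worked example, assuming 4 KiB pages: a read() with *ppos == 5000
* asks the hypervisor for the dump page starting at 4096 and copies
* data to user space from offset 904 within the returned page.
*/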
static const struct file_operations htmdump_fops = {
.llseek = NULL,
.read = htmdump_read,
.open = simple_open,
};
static int htmdump_init_debugfs(void)
{
htm_buf = kmalloc(PAGE_SIZE, GFP_KERNEL);
if (!htm_buf) {
pr_err("Failed to allocate htmdump buf\n");
return -ENOMEM;
}
htmdump_debugfs_dir = debugfs_create_dir("htmdump",
arch_debugfs_dir);
debugfs_create_u32("nodeindex", 0600,
htmdump_debugfs_dir, &nodeindex);
debugfs_create_u32("nodalchipindex", 0600,
htmdump_debugfs_dir, &nodalchipindex);
debugfs_create_u32("coreindexonchip", 0600,
htmdump_debugfs_dir, &coreindexonchip);
debugfs_create_u32("htmtype", 0600,
htmdump_debugfs_dir, &htmtype);
debugfs_create_file("trace", 0400, htmdump_debugfs_dir, htm_buf, &htmdump_fops);
return 0;
}
static int __init htmdump_init(void)
{
if (htmdump_init_debugfs())
return -ENOMEM;
return 0;
}
static void __exit htmdump_exit(void)
{
debugfs_remove_recursive(htmdump_debugfs_dir);
kfree(htm_buf);
}
module_init(htmdump_init);
module_exit(htmdump_exit);
MODULE_DESCRIPTION("PHYP Hardware Trace Macro (HTM) data dumper");
MODULE_LICENSE("GPL");
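/* Usage sketch (an annotation, not part of this module). Assuming
* debugfs is mounted at /sys/kernel/debug and arch_debugfs_dir resolves
* to the "powerpc" directory there, a minimal user-space reader could
* look like:
*
* #include <fcntl.h>
* #include <stdio.h>
* #include <unistd.h>
*
* int main(void)
* {
* char buf[4096];
* ssize_t n;
* // nodeindex, nodalchipindex, coreindexonchip and htmtype are
* // configured by writing ASCII numbers to the u32 files first
* int fd = open("/sys/kernel/debug/powerpc/htmdump/trace", O_RDONLY);
* if (fd < 0)
* return 1;
* while ((n = read(fd, buf, sizeof(buf))) > 0)
* fwrite(buf, 1, n, stdout);
* close(fd);
* return 0;
* }
*/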

@@ -52,7 +52,8 @@ enum {
enum {
DDW_EXT_SIZE = 0,
DDW_EXT_RESET_DMA_WIN = 1,
DDW_EXT_QUERY_OUT_SIZE = 2
DDW_EXT_QUERY_OUT_SIZE = 2,
DDW_EXT_LIMITED_ADDR_MODE = 3
};
static struct iommu_table *iommu_pseries_alloc_table(int node)
@@ -1284,17 +1285,13 @@ static LIST_HEAD(failed_ddw_pdn_list);
static phys_addr_t ddw_memory_hotplug_max(void)
{
resource_size_t max_addr = memory_hotplug_max();
struct device_node *memory;
resource_size_t max_addr;
for_each_node_by_type(memory, "memory") {
struct resource res;
if (of_address_to_resource(memory, 0, &res))
continue;
max_addr = max_t(resource_size_t, max_addr, res.end + 1);
}
#if defined(CONFIG_NUMA) && defined(CONFIG_MEMORY_HOTPLUG)
max_addr = hot_add_drconf_memory_max();
#else
max_addr = memblock_end_of_DRAM();
#endif
return max_addr;
}
@@ -1331,6 +1328,54 @@ static void reset_dma_window(struct pci_dev *dev, struct device_node *par_dn)
ret);
}
/*
* Platforms at LoPAR level 2.13 and later support placing the PHB in
* limited address mode. In this mode, the DMA address returned by DDW is
* above 4 GB but below 64 bits, which benefits IO adapters that don't
* support 64-bit DMA addresses.
*/
static int limited_dma_window(struct pci_dev *dev, struct device_node *par_dn)
{
int ret;
u32 cfg_addr, reset_dma_win, las_supported;
u64 buid;
struct device_node *dn;
struct pci_dn *pdn;
ret = ddw_read_ext(par_dn, DDW_EXT_RESET_DMA_WIN, &reset_dma_win);
if (ret)
goto out;
ret = ddw_read_ext(par_dn, DDW_EXT_LIMITED_ADDR_MODE, &las_supported);
/* The Limited Address Space extension is available on the platform,
* but DDW in limited addressing mode is not supported.
*/
if (!ret && !las_supported)
ret = -EPROTO;
if (ret) {
dev_info(&dev->dev, "Limited Address Space for DDW not Supported, err: %d", ret);
goto out;
}
dn = pci_device_to_OF_node(dev);
pdn = PCI_DN(dn);
buid = pdn->phb->buid;
cfg_addr = (pdn->busno << 16) | (pdn->devfn << 8);
ret = rtas_call(reset_dma_win, 4, 1, NULL, cfg_addr, BUID_HI(buid),
BUID_LO(buid), 1);
if (ret)
dev_info(&dev->dev,
"ibm,reset-pe-dma-windows(%x) for Limited Addr Support: %x %x %x returned %d ",
reset_dma_win, cfg_addr, BUID_HI(buid), BUID_LO(buid),
ret);
out:
return ret;
}
/* Return largest page shift based on "IO Page Sizes" output of ibm,query-pe-dma-window. */
static int iommu_get_page_shift(u32 query_page_size)
{
@@ -1398,7 +1443,7 @@ static struct property *ddw_property_create(const char *propname, u32 liobn, u64
*
* returns true if it can map all pages (direct mapping), false otherwise.
*/
static bool enable_ddw(struct pci_dev *dev, struct device_node *pdn)
static bool enable_ddw(struct pci_dev *dev, struct device_node *pdn, u64 dma_mask)
{
int len = 0, ret;
int max_ram_len = order_base_2(ddw_memory_hotplug_max());
@@ -1417,6 +1462,9 @@ static bool enable_ddw(struct pci_dev *dev, struct device_node *pdn)
bool pmem_present;
struct pci_dn *pci = PCI_DN(pdn);
struct property *default_win = NULL;
bool limited_addr_req = false, limited_addr_enabled = false;
int dev_max_ddw;
int ddw_sz;
dn = of_find_node_by_type(NULL, "ibm,pmemory");
pmem_present = dn != NULL;
@@ -1443,7 +1491,6 @@
* the ibm,ddw-applicable property holds the tokens for:
* ibm,query-pe-dma-window
* ibm,create-pe-dma-window
* ibm,remove-pe-dma-window
* for the given node in that order.
* the property is actually in the parent, not the PE
*/
@@ -1463,6 +1510,20 @@
if (ret != 0)
goto out_failed;
/* Is DMA Limited Addressing required? This is the case when the driver
* has requested DDW but only supports a mask narrower than 64 bits.
*/
limited_addr_req = (dma_mask != DMA_BIT_MASK(64));
/* place the PHB in Limited Addressing mode */
if (limited_addr_req) {
if (limited_dma_window(dev, pdn))
goto out_failed;
/* PHB is in Limited address mode */
limited_addr_enabled = true;
}
/*
* If there is no window available, remove the default DMA window,
* if it's present. This will make all the resources available to the
@@ -1509,6 +1570,15 @@
goto out_failed;
}
/* Maximum DMA window size that the device can address (in log2) */
dev_max_ddw = fls64(dma_mask);
/* If the device DMA mask is less than 64-bits, make sure the DMA window
* size is not bigger than what the device can access
*/
ddw_sz = min(order_base_2(query.largest_available_block << page_shift),
dev_max_ddw);
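/* Worked example with hypothetical numbers: a driver setting a 43-bit
* DMA mask gives fls64(DMA_BIT_MASK(43)) == 43, so even if the platform
* reports a larger largest_available_block, ddw_sz is capped at 43,
* i.e. a 2^43-byte window.
*/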
/*
* The "ibm,pmemory" can appear anywhere in the address space.
* Assuming it is still backed by page structs, try MAX_PHYSMEM_BITS
@@ -1517,23 +1587,21 @@
*/
len = max_ram_len;
if (pmem_present) {
if (query.largest_available_block >=
(1ULL << (MAX_PHYSMEM_BITS - page_shift)))
if (ddw_sz >= MAX_PHYSMEM_BITS)
len = MAX_PHYSMEM_BITS;
else
dev_info(&dev->dev, "Skipping ibm,pmemory");
}
/* check if the available block * number of ptes will map everything */
if (query.largest_available_block < (1ULL << (len - page_shift))) {
if (ddw_sz < len) {
dev_dbg(&dev->dev,
"can't map partition max 0x%llx with %llu %llu-sized pages\n",
1ULL << len,
query.largest_available_block,
1ULL << page_shift);
len = order_base_2(query.largest_available_block << page_shift);
len = ddw_sz;
dynamic_mapping = true;
} else {
direct_mapping = !default_win_removed ||
@@ -1547,8 +1615,9 @@
*/
if (default_win_removed && pmem_present && !direct_mapping) {
/* DDW is big enough to be split */
if ((query.largest_available_block << page_shift) >=
MIN_DDW_VPMEM_DMA_WINDOW + (1ULL << max_ram_len)) {
if ((1ULL << ddw_sz) >=
MIN_DDW_VPMEM_DMA_WINDOW + (1ULL << max_ram_len)) {
direct_mapping = true;
/* offset of the Dynamic part of DDW */
@@ -1559,8 +1628,7 @@
dynamic_mapping = true;
/* create max size DDW possible */
len = order_base_2(query.largest_available_block
<< page_shift);
len = ddw_sz;
}
}
@@ -1600,7 +1668,7 @@
if (direct_mapping) {
/* DDW maps the whole partition, so enable direct DMA mapping */
ret = walk_system_ram_range(0, memblock_end_of_DRAM() >> PAGE_SHIFT,
ret = walk_system_ram_range(0, ddw_memory_hotplug_max() >> PAGE_SHIFT,
win64->value, tce_setrange_multi_pSeriesLP_walk);
if (ret) {
dev_info(&dev->dev, "failed to map DMA window for %pOF: %d\n",
@@ -1689,7 +1757,7 @@ out_remove_win:
__remove_dma_window(pdn, ddw_avail, create.liobn);
out_failed:
if (default_win_removed)
if (default_win_removed || limited_addr_enabled)
reset_dma_window(dev, pdn);
fpdn = kzalloc(sizeof(*fpdn), GFP_KERNEL);
@@ -1708,6 +1776,9 @@ out_unlock:
dev->dev.bus_dma_limit = dev->dev.archdata.dma_offset +
(1ULL << max_ram_len);
dev_info(&dev->dev, "lsa_required: %x, lsa_enabled: %x, direct mapping: %x\n",
limited_addr_req, limited_addr_enabled, direct_mapping);
return direct_mapping;
}
@@ -1833,8 +1904,11 @@ static bool iommu_bypass_supported_pSeriesLP(struct pci_dev *pdev, u64 dma_mask)
{
struct device_node *dn = pci_device_to_OF_node(pdev), *pdn;
/* only attempt to use a new window if 64-bit DMA is requested */
if (dma_mask < DMA_BIT_MASK(64))
/* For DDW, the DMA mask should be more than 32 bits. For a mask wider
* than 32 bits but narrower than 64 bits, DMA addressing is supported
* in Limited Addressing mode.
*/
if (dma_mask <= DMA_BIT_MASK(32))
return false;
dev_dbg(&pdev->dev, "node is %pOF\n", dn);
@@ -1847,7 +1921,7 @@ static bool iommu_bypass_supported_pSeriesLP(struct pci_dev *pdev, u64 dma_mask)
*/
pdn = pci_dma_find(dn, NULL);
if (pdn && PCI_DN(pdn))
return enable_ddw(pdev, pdn);
return enable_ddw(pdev, pdn, dma_mask);
return false;
}
@@ -2349,11 +2423,17 @@ static int iommu_mem_notifier(struct notifier_block *nb, unsigned long action,
struct memory_notify *arg = data;
int ret = 0;
/* This notifier can get called when onlining persistent memory as well.
* TCEs are not pre-mapped for persistent memory. Persistent memory will
* always be above ddw_memory_hotplug_max()
*/
switch (action) {
case MEM_GOING_ONLINE:
spin_lock(&dma_win_list_lock);
list_for_each_entry(window, &dma_win_list, list) {
if (window->direct) {
if (window->direct && (arg->start_pfn << PAGE_SHIFT) <
ddw_memory_hotplug_max()) {
ret |= tce_setrange_multi_pSeriesLP(arg->start_pfn,
arg->nr_pages, window->prop);
}
@@ -2365,7 +2445,8 @@ static int iommu_mem_notifier(struct notifier_block *nb, unsigned long action,
case MEM_OFFLINE:
spin_lock(&dma_win_list_lock);
list_for_each_entry(window, &dma_win_list, list) {
if (window->direct) {
if (window->direct && (arg->start_pfn << PAGE_SHIFT) <
ddw_memory_hotplug_max()) {
ret |= tce_clearrange_multi_pSeriesLP(arg->start_pfn,
arg->nr_pages, window->prop);
}

@@ -12,7 +12,6 @@ obj-$(CONFIG_PPC_MSI_BITMAP) += msi_bitmap.o
obj-$(CONFIG_PPC_MPC106) += grackle.o
obj-$(CONFIG_PPC_DCR_NATIVE) += dcr-low.o
obj-$(CONFIG_PPC_PMI) += pmi.o
obj-$(CONFIG_U3_DART) += dart_iommu.o
obj-$(CONFIG_MMIO_NVRAM) += mmio_nvram.o
obj-$(CONFIG_FSL_SOC) += fsl_soc.o fsl_mpic_err.o
