-
Is the fan speed adjustable?
No. The fan speed is controlled automatically based on the system and device temperatures.
-
Could I mix GPUs from different vendors, such as NVIDIA and AMD, in a Falcon chassis?
Yes, as long as those GPUs are on our compatibility list.
-
Is Ethernet needed for using Falcon chassis?
Yes. The management user interface of Falcon chassis runs over the 1Gb management port, which should be connected to the network.
-
Could different GPU models be used at the same time?
Yes, different GPU models can be installed in Falcon 4005 / 4205 and used by one server host at the same time, as long as they are on our compatibility list. GPU devices proven to work with Falcon 4005 / 4205 can be used as if they were directly integrated into the server host. However, behavior still varies between hardware manufacturers. For example, the NVIDIA CUDA driver only allows GPUs with the same model name and the same memory size to run simultaneously; you can verify the installed models on the host as shown below.
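As a quick check, a minimal sketch assuming NVIDIA GPUs and the nvidia-smi tool on the host; it lists the allocated GPUs so that their model names and memory sizes can be compared:
# nvidia-smi -L
# nvidia-smi --query-gpu=name,memory.total --format=csv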
-
How do I ensure the network security?
H3 Platform uses third-party vulnerability scanning software such as OpenVAS and Nessus. We fix security issues based on the scan results on a regular basis and update the status on the Security Advisory page of the official website.
-
What is the maximum PCIe cable length?
We ship with a 2-meter mini-SAS HD cable; a 1-meter cable can be ordered additionally.
-
How many GPUs can be added to a host machine?
The number of PCIe devices that a host machine can handle depends on the PCIe resources (MMIO size and bus numbers in particular) that it can provide.
Our host bus adapter card takes up 13 buses, and the 4005 chassis (with devices fully installed) takes up 17 buses under standard mode and 10 buses under advanced mode.
The required MMIO size depends on the memory of your GPU devices: it should be set (in the host BIOS) greater than the total memory of all GPU devices. We recommend setting the MMIO size to at least 512 GB.
e.g., 4 × NVIDIA A100 40 GB = 160 GB, so set the MMIO size greater than 160 GB; see the sketch below for estimating this total.
The host machine may fail to boot when the PCIe resources are insufficient for your configuration.
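A minimal sketch for estimating this total on a Linux host, assuming NVIDIA GPUs and the nvidia-smi tool; it sums the memory of all visible GPUs in MiB:
# nvidia-smi --query-gpu=memory.total --format=csv,noheader,nounits | awk '{s+=$1} END {printf "Total GPU memory: %d MiB (~%d GiB)\n", s, s/1024}'
Set the MMIO size above the reported total (and no lower than the recommended 512 GB minimum).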
-
Could I add two HBA cards into one server host?
Yes, as long as there are enough PCIe slots on the server host.
-
How do I connect Falcon chassis to Ethernet?
The management user interface of Falcon 4005 or 4205 is a GUI (graphical user interface). Falcon 4005 or 4205 can be connected to the network via the RJ45 connector using a standard Ethernet cable. Users can then access the GUI through a browser such as Google Chrome. Please ensure that the Falcon 4005 (or 4205) and the management server are on the same subnet or can otherwise discover each other.
-
What is the boot sequence after everything is installed?
1. Power on Falcon chassis.
2. Power on the connected server hosts after Falcon chassis is on.
-
Could I use other PCIe HBA cards or cables to connect Falcon models?
No, please use only the card and cable we ship, as they are designed specifically for our composable expansion chassis.
-
How do I get the MAC address of Falcon chassis?
The MAC address is shown on the Overview page of the GUI.
-
Can I install NVIDIA RTX 30 series graphics cards?
Yes. The PCIe slots on the Falcon GPU chassis are standard full-height, double-width slots. Note that the device length must be less than 270 mm. Devices wider than double width (>~44 mm) can still be installed in the Falcon chassis by giving up the PCIe slots beside them.
However, graphics cards with a TDP over 300 W are supported only by the Falcon 4205 model.
-
Do I need to install anything on the server host?
No. No additional software needs to be installed on the server host; the allocated devices simply use their native drivers on the host OS.
-
What devices are supported?
You can find them on our compatibility list.
-
Could I use PCIe devices that are not on the compatibility list?
In principle, any standard PCIe device can be installed in Falcon chassis and distributed to the server hosts. However, H3 Platform cannot guarantee trouble-free operation, and H3 Platform shall not be liable for damage caused by non-compatible devices.
-
What is the supported browser for the GUI of Falcon chassis?
You can use Google Chrome, Microsoft Edge, or Mozilla Firefox to access the GUI of Falcon chassis. H3 Platform suggests using the latest version of the supported browsers.
-
Are AUX cables needed for GPUs?
Yes, an AUX cable is needed for GPUs requiring more than 75 W of power. H3 Platform provides two types of AUX cables, 8-pin to 8-pin and 8-pin to 8+6/8+8-pin, for different GPU models. We ship with the 8-pin to 8-pin cable; the 8-pin to 8+6/8+8-pin cable can be ordered additionally.
-
What is the warranty period?
H3 Platform offers a 2-year warranty on our products. Extended warranties of 1, 2, or 3 years can be purchased separately.
-
Is Ethernet needed for using H3 Center?
Yes. The management user interface of H3 Center runs over the 1Gb management port, through which all Falcon models can be accessed.
-
How much power is provided in each slot?
Falcon 4005:
300 W per slot: the maximum power output of the PCIe slot itself is 75 W, and the AUX power cable provides the extra 225 W (we ship with an 8-pin to 8-pin cable).
Falcon 4205:
450 W per slot: the maximum power output of the PCIe slot itself is 75 W, and the AUX power cable provides the extra 375 W (8-pin to 8-pin or 8+8-pin cable).
-
How many devices are supported in one Falcon chassis?
Falcon 4005 and 4205 can hold up to 4 GPUs (four double-width PCIe 4.0 x16 slots) and 1 low-profile device (PCIe 4.0 x16).
-
What chassis management CPU does Falcon chassis use?
Falcon 4005 and 4205 use AST2500 as the chassis management CPU.
-
Could a rail be used?
No, Falcon 4005 and 4205 do not support rail installation.
-
What are the requirements for the server host?
The requirements include:
1. The server host should be x86-based.
2. The server host should run a compatible operating system.
3. The server host should have at least one standard low-profile PCIe slot for HBA card installation.
4. The server host should set BIOS to the following:
- Above 4G Decoding - Enable
- MMIOH Base = 56TB
- MMIO High Size = 1024G
5. The server host should have the drivers for the allocated devices installed; H3 Platform supports only the devices on our compatibility list.
-
What should I do if Falcon chassis fails to connect to a 10M/100M switch?
The Falcon chassis NIC port is compliant with the IEEE 802.3ab (1000Base-T) standard only, so a failed connection to a 10M/100M switch is expected. Please use a gigabit-capable switch.
-
How do I resolve a "PCI out of resource" error in the server BIOS?
When multiple PCIe adapters (e.g., GPUs) are installed, the following errors could occur during POST and halt the server host: "PCI out of resource" or "Insufficient PCI Resources Detected".
To resolve the issue, follow the steps below:
For Intel Xeon Phi Server
1. Temporarily remove the mini-SAS HD cable of Falcon 4005 (4205).
2. Update the BIOS and firmware to the latest version.
3. Disable any unused devices and Option ROMs in the BIOS.
- For onboard SATA/SAS controllers, go to Advanced > Mass Storage Controller Configuration.
- For onboard NICs, go to Advanced > NIC Configuration.
4. Go to Advanced > PCI Configuration.
- Set Maximize Memory below 4 GB to Disabled.
- Set Memory Mapped I/O above 4 GB to Enabled.
- Set Memory Mapped I/O Size to 512 G or higher.
5. Reconnect the mini-SAS HD cable of Falcon 4005 (4205) and see if the server host boots properly.
For Supermicro Xeon Phi Server
1. Temporarily remove the Mini-SAS HD cable of Falcon 4005 (4205).
2. Go to the BIOS Advanced menu and set:
- Advanced > PCIe/PCI/PnP Configuration > Above 4G Decoding = Enabled
- Advanced > PCIe/PCI/PnP Configuration > MMIOH Base = 56T
- Advanced > PCIe/PCI/PnP Configuration > MMIO High Size = 512G or higher
3. Reconnect the Mini-SAS HD cable of Falcon 4005 (4205) and see if the server host boots properly.
-
How do I resolve a GPU peer-to-peer underperformance issue?
1. Make sure that your GPU model supports the peer-to-peer function.
2. Disable PCI Access Control Services (ACS) on the host side (see the description below).
IO virtualization (VT-d on Intel platforms, or IOMMU on AMD platforms) can interfere with GPUDirect by redirecting all PCI point-to-point traffic to the CPU root complex, causing a significant performance reduction or even a hang. You can check whether ACS is enabled on the PCI bridges by executing the following command:
# sudo lspci -vvv | grep ACSCtl
If it shows "SrcValid+", then ACS might be enabled. Looking at the full output of lspci, you can check whether a PCI bridge has ACS enabled.
If the PCI switches have ACS enabled, it needs to be disabled. On some systems this can be done from the BIOS by disabling IO virtualization (VT-d) and ACS; it can also be cleared at runtime, as in the sketch below.
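A minimal runtime sketch, assuming a Linux host with root access and a pciutils build that knows the ECAP_ACS capability name; it clears the ACS control register on every device that exposes it (the change does not survive a power cycle):
for BDF in $(lspci | awk '{print $1}'); do
    # Clear ACS control; devices without the capability are skipped.
    setpci -s "${BDF}" ECAP_ACS+6.w=0000 2>/dev/null || true
done
Afterwards, re-run "sudo lspci -vvv | grep ACSCtl"; no bridge should report "SrcValid+" any more.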
Disabling IO virtualization:
Host BIOS > IO or Advanced
Disable VT for Direct IO (VT-d) for Intel platforms.
Disable IOMMU for AMD platforms.
Other platforms may use a different name for the IO virtualization function; please ask your server vendor if you cannot find it.
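After changing the BIOS settings, you can confirm the state on a Linux host (a hedged sketch; exact kernel messages vary by platform):
# dmesg | grep -i -e DMAR -e IOMMU
# cat /proc/cmdline
The first command should show the IOMMU disabled or absent; the second should show no intel_iommu=on / amd_iommu=on override on the kernel command line. Peer-to-peer throughput can then be re-measured, for example with the p2pBandwidthLatencyTest sample shipped with the CUDA toolkit samples.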
-
Failure to assign/remove a device.
Make sure that the device is on the compatibility list.
Wait a minute, then retry assigning/removing the device.
Make sure the device is in good condition:
- The PCIe power cable is properly connected to the device.
- The device is properly plugged into the PCIe slot.
- Clean the PCIe slot and the gold fingers of the device.
- Run a device power cycle.
Make sure that the host is properly linked to the Falcon chassis:
- The mini-SAS HD cables are properly connected.
- The HBA is properly installed.
- Reboot the host machine.
Retry assigning/removing the device.
If it still fails, try rebooting the whole system.
-
Information does not display properly on GUI
- Try refreshing the page.
- Update the browser to the latest version.
- If the above steps do not fix the issue, try rebooting the Falcon GPU system.
-
Failure to access GUI
1. Make sure that the management port is connected to your network.
2. Make sure that the client and the Falcon system are on the same subnet.
- If the LCD on the chassis is functioning, please check your network (see the connectivity check sketched below).
- If the LCD is not functioning, the BMC of the Falcon system may have hung; try rebooting the system.
3. If you forget the IP address of the Falcon GPU system or the GUI login credentials:
- Check the LCD on the chassis for the IP address.
- If that does not help, reset the Falcon GPU system to default.
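A minimal connectivity check from the client (a sketch assuming a Linux client; 192.168.0.100 below is a placeholder for the chassis IP shown on the LCD):
# ping -c 3 192.168.0.100
# ip addr show
# curl -I http://192.168.0.100/
ping verifies basic reachability of the management port, ip addr confirms the client has an address on the same subnet, and curl checks whether the GUI answers; adjust the scheme or port if your deployment differs.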
-
Device link down issue
Please check whether the device is on the compatibility list of your Falcon GPU solution model.
If it is and the link is still down, try rebooting the Falcon GPU system.
-
Host link down issue
Host link down can happen due to an improper cable connection or an incorrect boot sequence.
Please check the connection of the mini-SAS HD cables on both the host adapter and the Falcon chassis (make sure all the cables are properly plugged into the connectors).
Boot sequence:
- Boot up the Falcon GPU system. When the system is ready, the LCD should display the model name and IP address.
- Boot up the host machine(s) only when the Falcon GPU system is ready.