# Stable Diffusion Proxy Issue

Based on the error and your environment, here's how to fix the "Unknown scheme for proxy URL" issue:
**Solution:**

The error occurs because `httpx` doesn't support `socks://` proxies by default. You have two options:
## Option 1: Remove the SOCKS Proxy Reference

- Check your environment variables for stray proxy settings:

  ```bash
  env | grep -i proxy
  ```

  Look for `socks://` references (especially `ALL_PROXY` or `http_proxy`).

- Unset the problematic variable (replace `VAR_NAME` with the actual name found above):

  ```bash
  unset VAR_NAME  # e.g., unset ALL_PROXY
  ```
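The `grep` step above can also be done in Python. This is a minimal, hypothetical helper (not part of the WebUI) that scans an environment mapping for proxy variables whose value uses a SOCKS scheme:

```python
import os

def find_socks_proxies(environ=os.environ):
    """Return {VAR: value} for any *_proxy/ALL_PROXY variable (any casing)
    whose value uses a socks scheme. Illustration only."""
    hits = {}
    for name, value in environ.items():
        if name.lower().endswith("proxy") and value.lower().startswith("socks"):
            hits[name] = value
    return hits

# Example with a fake environment (values mirror the ones in this thread):
fake_env = {
    "HTTP_PROXY": "http://127.0.0.1:7890",
    "ALL_PROXY": "socks://127.0.0.1:7891",
}
print(find_socks_proxies(fake_env))  # {'ALL_PROXY': 'socks://127.0.0.1:7891'}
```

Running it against `os.environ` (the default) shows exactly which variable to `unset`.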
## Option 2: Add SOCKS Support to httpx

- Activate your Python virtual environment:

  ```bash
  source /home/lzw/Projects/stable-diffusion-webui/venv/bin/activate
  ```

- Install the required dependencies:

  ```bash
  pip install -U "httpx[socks]"
  ```

- Deactivate the environment:

  ```bash
  deactivate
  ```
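To confirm Option 2 worked without launching the whole WebUI, you can check (inside the venv) whether `socksio` — the package the `httpx[socks]` extra pulls in — is importable. A small sketch:

```python
import importlib.util

def has_socks_support(package="socksio"):
    """Return True if the given SOCKS backend package is importable.

    httpx's optional [socks] extra installs `socksio`; without it,
    socks:// proxy URLs raise "Unknown scheme for proxy URL".
    """
    return importlib.util.find_spec(package) is not None

print("socksio installed:", has_socks_support())
```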
**Why this happens:**

- Your main proxies (`HTTP_PROXY`/`HTTPS_PROXY`) use `http://` (supported), but something (likely another app or script) is setting a `socks://` proxy that `httpx` can't handle without extra packages.
- The stack trace shows Gradio/`httpx` trying to use `socks://127.0.0.1:7891`, which isn't in your `checkproxy` output.
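For illustration, the scheme check that trips httpx can be approximated with the standard library's `urlparse`. The `SUPPORTED_SCHEMES` set below is a simplification (httpx also accepts `socks5://` once the `[socks]` extra is installed):

```python
from urllib.parse import urlparse

SUPPORTED_SCHEMES = {"http", "https"}  # simplification of what httpx handles by default

for proxy in ("http://127.0.0.1:7890", "socks://127.0.0.1:7891"):
    scheme = urlparse(proxy).scheme
    status = "ok" if scheme in SUPPORTED_SCHEMES else "unknown scheme"
    print(f"{proxy} -> {scheme}: {status}")
```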
**Verification:**

After fixing, run:

```bash
./webui.sh
```

The SOCKS proxy error should disappear.

💡 **Note:** If you didn't intentionally set a SOCKS proxy, investigate applications that might auto-configure proxies (e.g., VPNs, network managers, or shell config files like `~/.bashrc`).
---

# Fixing "Torch is not able to use GPU"

The error `Torch is not able to use GPU; add --skip-torch-cuda-test to COMMANDLINE_ARGS variable to disable this check` indicates that PyTorch, used by Stable Diffusion WebUI, cannot detect or access a compatible GPU. This is usually caused by missing CUDA libraries, an incompatible GPU, driver issues, or a configuration problem. Below are steps to fix the issue, tailored to your environment (glibc 2.35, Python 3.10.12, GCC 11.4.0, likely Ubuntu 22.04 based on the glibc version).
## Steps to Fix the Issue

### 1. Verify GPU and CUDA Compatibility

- **Check if you have an NVIDIA GPU:**

  ```bash
  lspci | grep -i nvidia
  ```

  This lists NVIDIA hardware. If no output appears, your system may not have an NVIDIA GPU; PyTorch requires an NVIDIA GPU for CUDA support.
- **Check the NVIDIA driver installation:**

  ```bash
  nvidia-smi
  ```

  If the driver is installed, this displays a table with GPU details (e.g., driver version, CUDA version). If not, install the NVIDIA driver:

  ```bash
  sudo apt-get update
  sudo apt-get install nvidia-driver-<version> nvidia-utils-<version> -y
  ```

  Replace `<version>` with the latest stable driver (e.g., `535` or `550`). Find the appropriate driver version with:

  ```bash
  ubuntu-drivers devices
  sudo ubuntu-drivers autoinstall
  ```
- **Check the CUDA version:**

  PyTorch requires CUDA libraries. Check the installed CUDA Toolkit version:

  ```bash
  nvcc --version
  ```

  (Note that `nvidia-smi` reports the highest CUDA version the driver supports, which can differ from the toolkit version `nvcc` reports.) If the toolkit is not installed:

  ```bash
  sudo apt-get install nvidia-cuda-toolkit -y
  ```

  Alternatively, download the latest CUDA Toolkit from NVIDIA's website (e.g., CUDA 11.8 or 12.1) and follow their installation guide.
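When you later pick a PyTorch wheel (step 2), you will need the CUDA release expressed as a `cuXXX` tag. A small sketch of that mapping, with `parse_nvcc_release` and `wheel_tag` as hypothetical helper names:

```python
import re

def parse_nvcc_release(nvcc_output):
    """Extract the CUDA release (e.g. '11.8') from `nvcc --version` output."""
    m = re.search(r"release (\d+\.\d+)", nvcc_output)
    return m.group(1) if m else None

def wheel_tag(release):
    """Map a CUDA release like '11.8' to the PyTorch wheel suffix 'cu118'."""
    return "cu" + release.replace(".", "")

# Typical last line of `nvcc --version` on CUDA 11.8:
sample = "Cuda compilation tools, release 11.8, V11.8.89"
print(wheel_tag(parse_nvcc_release(sample)))  # cu118
```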
### 2. Verify PyTorch Installation

The error suggests PyTorch is installed but cannot use the GPU. Ensure you have the correct PyTorch build with CUDA support.

- **Check the PyTorch installation:**

  ```bash
  python3 -c "import torch; print(torch.__version__); print(torch.cuda.is_available())"
  ```

  The output should include a PyTorch version (e.g., `2.0.1`) and `True` for `torch.cuda.is_available()`. If it prints `False`, PyTorch is not detecting the GPU.

- **Reinstall PyTorch with CUDA support:**

  For Python 3.10 and CUDA 11.8, install PyTorch in your Stable Diffusion environment:

  ```bash
  cd /home/lzw/Projects/stable-diffusion-webui
  source venv/bin/activate
  pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu118
  ```

  Replace `cu118` with your CUDA version (e.g., `cu121` for CUDA 12.1). Check supported versions on PyTorch's official site.

- **Verify after reinstallation:**

  ```bash
  python3 -c "import torch; print(torch.cuda.is_available()); print(torch.cuda.get_device_name(0))"
  ```
### 3. Bypass the CUDA Check (Temporary Workaround)

If you want to run Stable Diffusion without GPU support (e.g., for testing on CPU), bypass the CUDA check by adding `--skip-torch-cuda-test` to the command-line arguments.

- Edit `webui-user.sh` (or create it if it doesn't exist):

  ```bash
  nano /home/lzw/Projects/stable-diffusion-webui/webui-user.sh
  ```

  Add or modify the `COMMANDLINE_ARGS` line:

  ```bash
  export COMMANDLINE_ARGS="--skip-torch-cuda-test"
  ```

  Save and exit.

- Run the script:

  ```bash
  ./webui.sh
  ```

  This allows Stable Diffusion to run on CPU, but performance will be significantly slower.
### 4. Ensure TCMalloc Is Properly Configured

Your output shows TCMalloc (`libtcmalloc_minimal.so.4`) is detected and linked with `LD_PRELOAD`. Confirm it's working:

```bash
echo $LD_PRELOAD
```

If it outputs `/lib/x86_64-linux-gnu/libtcmalloc_minimal.so.4`, you're set. If not, set it manually:

```bash
export LD_PRELOAD=/lib/x86_64-linux-gnu/libtcmalloc_minimal.so.4
```

Or add the same line to `webui-user.sh`:

```bash
export LD_PRELOAD=/lib/x86_64-linux-gnu/libtcmalloc_minimal.so.4
```
### 5. Check Environment Variables and Paths

Ensure your environment is correctly set up:

- **Check `LD_LIBRARY_PATH`:**

  CUDA libraries must be accessible. Add them if needed:

  ```bash
  export LD_LIBRARY_PATH=/usr/local/cuda/lib64:$LD_LIBRARY_PATH
  ```

  Add this to `~/.bashrc` or `webui-user.sh` for persistence.

- **Activate the virtual environment:**

  Always activate the Stable Diffusion virtual environment before running:

  ```bash
  cd /home/lzw/Projects/stable-diffusion-webui
  source venv/bin/activate
  ```
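Note that modifying `os.environ` from Python only affects that process, not your shell, so the `export` line above is what actually matters for `webui.sh`. But as an illustration of the colon-separated path handling (avoiding the duplicate entries that repeated exports create), here is a hypothetical idempotent prepend helper:

```python
import os

def prepend_path(var, new_dir, environ=os.environ):
    """Prepend new_dir to a colon-separated path variable, avoiding duplicates."""
    parts = [p for p in environ.get(var, "").split(":") if p]
    if new_dir in parts:
        parts.remove(new_dir)
    environ[var] = ":".join([new_dir] + parts)
    return environ[var]

env = {"LD_LIBRARY_PATH": "/usr/lib"}
print(prepend_path("LD_LIBRARY_PATH", "/usr/local/cuda/lib64", env))
# /usr/local/cuda/lib64:/usr/lib
```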
### 6. Update Stable Diffusion WebUI

Your version (`v1.10.1`, commit `82a973c`) may have compatibility issues. Update to the latest version:

```bash
cd /home/lzw/Projects/stable-diffusion-webui
git pull
```

Then reinstall dependencies by relaunching:

```bash
./webui.sh
```
### 7. Troubleshooting

- **If `nvidia-smi` fails:** Reinstall the NVIDIA driver or check for GPU hardware issues.
- **If PyTorch still doesn't detect the GPU:**
  - Ensure CUDA and cuDNN are installed correctly. Install cuDNN if missing:

    ```bash
    sudo apt-get install libcudnn8
    ```

  - Verify CUDA version compatibility with your GPU and PyTorch. For example, older GPUs (e.g., the Kepler series) may not support CUDA 12.
- **If errors persist:** Check the Stable Diffusion logs for detailed errors:

  ```bash
  cat /home/lzw/Projects/stable-diffusion-webui/log/webui.log
  ```

  Share any specific errors for further assistance.
- **Hardware limitations:** If your GPU is too old (e.g., a pre-2016 NVIDIA GPU), it may not support modern CUDA versions. In that case, use `--skip-torch-cuda-test` or upgrade your GPU.
### 8. Run the Application

After applying the fixes, run the WebUI:

```bash
cd /home/lzw/Projects/stable-diffusion-webui
./webui.sh
```

If you fixed the GPU issue, the error should disappear. If you're using `--skip-torch-cuda-test`, it will run on CPU.
## Notes

- **System info:** Your setup (glibc 2.35, Python 3.10.12, GCC 11.4.0) suggests Ubuntu 22.04, which is compatible with CUDA 11.8 or 12.1 and recent PyTorch versions.
- **Performance:** Running on CPU (with `--skip-torch-cuda-test`) is a fallback but will be slow. Fixing GPU support is recommended for Stable Diffusion.
- **TCMalloc:** The TCMalloc issue appears resolved based on your output. Ensure `LD_PRELOAD` remains set if you encounter memory-related issues.
If you still face issues or need help with specific error messages, please provide additional details (e.g., GPU model, CUDA version, or full logs), and I can refine the solution!