
Charts in dashboards are subject to frequent errors with Chrome users #16742

Closed
2 of 3 tasks
ValentinC-BR opened this issue Sep 20, 2021 · 9 comments
Labels
#bug Bug report

Comments

@ValentinC-BR

Since Superset 1.x, all our Chrome users get recurring errors when charts load (in dashboards), forcing them to refresh the page.

Expected results

All the charts are displayed without errors.

Actual results

The following error is displayed, seemingly at random: Unexpected error

When we click on "See More", here's the error we get:

<html>
<head><title>502 Bad Gateway</title></head>
<body>
<center><h1>502 Bad Gateway</h1></center>
</body>
</html>
<!-- a padding to disable MSIE and Chrome friendly error page -->
<!-- a padding to disable MSIE and Chrome friendly error page -->
<!-- a padding to disable MSIE and Chrome friendly error page -->
<!-- a padding to disable MSIE and Chrome friendly error page -->
<!-- a padding to disable MSIE and Chrome friendly error page -->
<!-- a padding to disable MSIE and Chrome friendly error page -->

Important:

  • This can happen in one or several charts.
  • It usually disappears after refreshing the page once or twice.
  • It only happens in dashboards (we've never had this issue in "View chart in explore").
  • It seems to affect only Chrome users (Firefox users did not complain about this).

Screenshots

What we can see in dashboards:

[screenshot: chart showing the "Unexpected error" message]

What we see in "See more":

[screenshot: the 502 Bad Gateway error details]

How to reproduce the bug

  1. Use Google Chrome
  2. Open any dashboard

Environment

superset version: 1.3.0
python version: 3.7.9
node.js version: not applicable; I run on Kubernetes, using gunicorn as the server
source: Athena

Checklist

Make sure to follow these steps before submitting your issue - thank you!

  • I have checked the superset logs for python stacktraces and included it here as text if there are any.
  • I have reproduced the issue with at least the latest released version of superset.
  • I have checked the issue tracker for the same issue and I haven't found one similar.

Additional context

/

@ValentinC-BR ValentinC-BR added the #bug Bug report label Sep 20, 2021
@ShimiBaliti

@ValentinC-BR did you manage to solve this issue?

@ValentinC-BR
Author

No.
I have no choice but to refresh the page each time it occurs.

@nytai
Member

nytai commented Nov 10, 2021

@ValentinC-BR This error is being thrown by nginx or whichever proxy is in front of superset. 502 means the service (superset) is unavailable: either the superset server has crashed (out of memory or CPU, etc.), or it is too busy to serve your request. This error is due to your deployment setup and there's nothing we can do on the superset application side, at least not without the log/stack trace for what actually caused the service to report as unavailable.

Closing this issue as it is not actionable from an application perspective. If you do a root-cause analysis that points to an issue with the superset application, please feel free to reopen the ticket. For now I would suggest scaling up your superset deployment, either vertically or horizontally, to see if that fixes it.

@nytai nytai closed this as completed Nov 10, 2021
@vivek-kandhvar

@ValentinC-BR We are also seeing this issue very frequently with 1.3.0. Just wanted to check whether you had faced this issue prior to 1.x. Can you confirm that you didn't change anything before moving to 1.x?

@Pinimo

Pinimo commented Nov 16, 2021

Hi @vivek-kandhvar! Nice to see we're not alone 😉 On the whole, I can say we did not change much configuration when moving to 1.x (1.1 and 1.2 in our case). But our config is constantly evolving, so I cannot guarantee it with perfect certainty.

@mdeshmu
Contributor

mdeshmu commented May 27, 2022

@ValentinC-BR @ShimiBaliti @vivek-kandhvar @Pinimo I was getting frequent 502 errors during loading of charts in my dashboard.

This is how traffic flows in my superset setup: AWS ALB --> Gunicorn --> Superset app
I am using the official superset docker image.

I modified the timeout settings (i.e. GUNICORN_TIMEOUT and SUPERSET_WEBSERVER_TIMEOUT), but that didn't resolve the problem.
I had already increased SERVER_WORKER_AMOUNT to 8 and was using the default thread value of 20, but that didn't resolve the issue either.
My ECS task's CPU/memory are underutilized, so scaling is not the cause of the problem.

Finally, I saw this blog, which says the solution is to set Gunicorn's --keep-alive higher than the ALB idle timeout: https://www.tessian.com/blog/how-to-fix-http-502-errors/

The default value for --keep-alive is 2 seconds. Even Gunicorn's official documentation here https://docs.gunicorn.org/en/stable/settings.html#keepalive says:
"Generally set in the 1-5 seconds range for servers with direct connection to the client (e.g. when you don’t have separate load balancer). When gunicorn is deployed behind a load balancer, it often makes sense to set this to a higher value."

But run-server.sh in the official docker image doesn't have an option for setting Gunicorn's --keep-alive to a custom value.
So I added this line to a local copy of run-server.sh:
--keep-alive ${GUNICORN_KEEPALIVE:-65}
and created a custom image by overwriting the run-server.sh file in the official image. This finally solved my problem.

I hope this helps people with similar setup as mine.
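
For readers with a similar setup, the change above can be sketched as the patched gunicorn invocation. Note that GUNICORN_KEEPALIVE is the commenter's own variable, not part of the official image, and 65 is chosen simply to exceed the ALB's default 60-second idle timeout; treat this as a sketch, not the official script.

```shell
#!/usr/bin/env bash
# Sketch of the patched gunicorn line in a custom run-server.sh.
# "${GUNICORN_KEEPALIVE:-65}" falls back to 65 when the variable is
# unset, keeping keep-alive above the ALB's default 60s idle timeout.
KEEPALIVE="${GUNICORN_KEEPALIVE:-65}"
echo "gunicorn would be started with: --keep-alive ${KEEPALIVE}"
```

Setting the fallback in one place like this means the image works unmodified by default but can still be tuned per deployment through the environment.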

@skannan-maf

I am on 3.1.0 and the issue still persists! This is marked as closed... but I wonder what the fix is?

@rusackas
Member

rusackas commented Oct 4, 2024

This error is being thrown by nginx or whichever proxy is in front of superset. 502 means the service (superset) is unavailable: either the superset server has crashed (out of memory or CPU, etc.), or it is too busy to serve your request. This error is due to your deployment setup and there's nothing we can do on the superset application side, at least not without the log/stack trace for what actually caused the service to report as unavailable.

You're going to have to choose between (1) troubleshooting your server logs, (2) asking for assistance here (though the thread will remain closed), (3) asking for assistance in #deploying-superset on Slack, or (4) avoiding all this by using a commercial/hosted Superset provider (I won't name names, but let me know if you need a recommendation).
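
As a starting point for option (1), one might grep the proxy and gunicorn logs around the time of a failure. The log paths below are assumptions (substitute whatever your deployment uses), and the search patterns are illustrative, not exhaustive.

```shell
#!/usr/bin/env bash
# Hypothetical log locations -- substitute the paths your deployment uses.
PROXY_LOG="${PROXY_LOG:-/var/log/nginx/access.log}"
APP_LOG="${APP_LOG:-/var/log/superset/gunicorn.log}"

# Recent 502 responses recorded by the proxy.
[ -r "$PROXY_LOG" ] && grep ' 502 ' "$PROXY_LOG" | tail -n 5

# Gunicorn worker timeouts/restarts often explain upstream 502s.
[ -r "$APP_LOG" ] && grep -iE 'worker timeout|booting worker|worker exiting' "$APP_LOG" | tail -n 5
exit 0
```

Matching timestamps between the two logs usually shows whether the 502 came from a crashed/restarting worker or from the proxy giving up on a slow response.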

@devysh1907

@rusackas, please name some.

9 participants