Problem with samples #4

Open
kikomle opened this issue Mar 17, 2023 · 5 comments

Comments

@kikomle

kikomle commented Mar 17, 2023

[Screenshot of the error attached: 2023-03-17 at 11:11:42]

I am getting this error constantly. I tried test_size instead of train_size, but I get the same result.

Here is my code:
```python
import csv
import datetime
import matplotlib.pyplot as plt
import numpy as np
import os
import pandas as pd
import seaborn as sns
import sklearn.metrics
import tensorflow as tf

from numpy import mean
from numpy import std
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import LabelEncoder
from tensorflow.keras import activations
from tensorflow.keras import layers

from wwo_hist import retrieve_hist_data

BATCH_SIZE = 64
MELTING_TEMPERATURE = 2
MIN_SNOW_CM = 0.5 # Above this value, we consider it as snow
NUM_EPOCHS = 20
OUTPUT_DATASET_FILE = "snow_dataset.csv"
TFL_MODEL_FILE = "snow_forecast_model.tflite"
TFL_MODEL_HEADER_FILE = "snow_forecast_model.h"
TF_MODEL = "snow_forecast"

print("data import")

frequency = 1
api_key = '27a946a50c0e4b0daec134825230803'
location_list = ['canazei']
df_weather = retrieve_hist_data(api_key,
                                location_list,
                                '01-JAN-2011',
                                '31-DEC-2012',
                                frequency,
                                location_label = False,
                                export_csv = False,
                                store_df = True)
t_list = df_weather[0].tempC.astype(float).to_list()
h_list = df_weather[0].humidity.astype(float).to_list()
s_list = df_weather[0].totalSnow_cm.astype(float).to_list()

print("binarize")

def binarize(snow, threshold):
    if snow > threshold:
        return 1
    else:
        return 0

print("graphprint")

#s_bin_list = [binarize(snow, 0.5) for snow in s_list]
#cm = plt.colormaps.get_cmap('gray_r')
#plt.figure(dpi=150)
#sc = plt.scatter(t_list, h_list, c=s_bin_list, cmap=cm, label="Snow")
#plt.colorbar(sc)
#plt.grid(True)
#plt.title("Snow(T, H)")
#plt.xlabel("Temperature - °C")
#plt.ylabel("Humidity - %")
#plt.show()

print("labels")

def gen_label(snow, temperature):
    if snow > 0.5 and temperature < 2:
        return "Yes"
    else:
        return "No"

snow_labels = [gen_label(snow, temp) for snow, temp in zip(s_list, t_list)]

csv_header = ["Temp0", "Temp1", "Temp2", "Humi0", "Humi1", "Humi2", "Snow"]
df_dataset = pd.DataFrame(list(zip(t_list[:-2], t_list[1:-1], t_list[:-2], h_list[:-2], h_list[1:-1], h_list[:2], snow_labels[2:])), columns = csv_header)

df0 = df_dataset[df_dataset['Snow'] == "No"]
df1 = df_dataset[df_dataset['Snow'] == "Yes"]
if len(df1.index) < len(df0.index):
    df0_sub = df0.sample(len(df1.index))
    df_dataset = pd.concat([df0_sub, df1])
else:
    df1_sub = df1.sample(len(df0.index))
    df_dataset = pd.concat([df1_sub, df0])

t_list = df_dataset['Temp0'].tolist()
h_list = df_dataset['Humi0'].tolist()
t_list = t_list + df_dataset['Temp2'].tail(2).tolist()
h_list = t_list + df_dataset['Humi2'].tail(2).tolist()

t_avg = mean(t_list)
h_avg = mean(h_list)
t_std = std(t_list)
h_std = std(h_list)

print("COPY HERE !!!!!")
print("Temperature - [MEAN, STD]", round(t_avg, 5), round(t_std, 5))
print("Humidity - [MEAN, STD]", round(h_avg, 5), round(h_std, 5))

def scaling(val, avg, std):
    return (val - avg) / std

df_dataset['Temp0'] = df_dataset['Temp0'].apply(lambda x:scaling(x, t_avg, t_std))
df_dataset['Temp1'] = df_dataset['Temp1'].apply(lambda x:scaling(x, t_avg, t_std))
df_dataset['Temp2'] = df_dataset['Temp2'].apply(lambda x:scaling(x, t_avg, t_std))

df_dataset['Humi0'] = df_dataset['Humi0'].apply(lambda x:scaling(x, t_avg, t_std))
df_dataset['Humi1'] = df_dataset['Humi1'].apply(lambda x:scaling(x, t_avg, t_std))
df_dataset['Humi2'] = df_dataset['Humi2'].apply(lambda x:scaling(x, t_avg, t_std))

f_names = df_dataset.columns.values[0:6]
l_name = df_dataset.columns
x = df_dataset[f_names]
y = df_dataset[l_name]

labelencoder = LabelEncoder()
labelencoder.fit(y.Snow)
y_encoded = labelencoder.transform(y.Snow)

# Split 1 (85% vs 15%)

x_train, x_validate_test, y_train, y_validate_test = train_test_split(x, y_encoded, train_size=0.15, random_state = 1)

# Split 2 (50% vs 50%)

x_test, x_validate, y_test, y_validate = train_test_split(x_validate_test, y_validate_test, train_size=0.50, random_state = 3)

model = tf.keras.Sequential()
model.add(layers.Dense(12, activations='relu', input_shape=(len(f_names),)))
model.add(layers.Dropout(0.2))
model.add(layers.Dense(1, activation='sigmoid'))
model.summary()

```
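
A note on the error above (the screenshot is not legible here, so this is a hedged guess): in the `pd.DataFrame(list(zip(...)))` line, `h_list[:2]` keeps only two elements, and `zip` truncates to its shortest input, so `df_dataset` ends up with at most two rows. `train_test_split` with `train_size=0.15` then cannot build a non-empty training set and raises a ValueError about the samples. The comment "Split 1 (85% vs 15%)" also suggests `train_size=0.85` was intended, and `layers.Dense` expects the keyword `activation=` rather than `activations=`. The sketch below reproduces the split failure on a deliberately tiny array; `x_small` and `y_small` are made-up stand-ins for the truncated dataset, not code from the book.

```python
# Minimal reproduction sketch (assumption: the reported error is
# train_test_split failing because df_dataset has almost no rows).
import numpy as np
from sklearn.model_selection import train_test_split

x_small = np.arange(2).reshape(-1, 1)  # 2 rows, like df_dataset after zip truncates on h_list[:2]
y_small = np.array([0, 1])

try:
    # 15% of 2 samples rounds down to 0, so the resulting train set would be empty.
    train_test_split(x_small, y_small, train_size=0.15, random_state=1)
except ValueError as err:
    print(err)
```

If that is the cause, slicing the inputs as `t_list[2:]` / `h_list[2:]` for the third columns (and using `train_size=0.85` for the first split) should leave enough rows for the split to succeed.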

@kikomle

kikomle commented Mar 17, 2023

I know it may be a rookie mistake, as I have little experience with Python and more with Arduino. I think I have included everything, but I may be wrong.

@shruthis-shetty

Hi @kikomle, are we good to close this issue, or does it still need to be solved?

@kikomle

kikomle commented Mar 31, 2023

I resolved that issue, but now my code stops after training.

@kikomle

kikomle commented Mar 31, 2023

I solved this as well by commenting out the graph outputs; after each plot the program stops. Maybe there is a key to press to continue?
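
On the "program stops after the plots" behaviour: with a desktop matplotlib backend, `plt.show()` blocks until the figure window is closed, so the script only continues once each window is dismissed. Closing the windows, or using a non-blocking show, lets the rest of the script run. A minimal sketch with made-up plot data (assuming an interactive backend, not the book's exact code):

```python
import matplotlib.pyplot as plt

plt.figure(dpi=150)
plt.plot([0, 1, 2], [2, 1, 3])  # made-up data, just to have a figure
plt.title("Example figure")
plt.show(block=False)  # returns immediately instead of waiting for the window to be closed
plt.pause(2)           # keep the window visible briefly
print("script continues here (e.g. model.fit would run next)")
```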

@zoldaten

zoldaten commented Feb 7, 2024

See my comment on the second edition of the book.
