I'm trying to get predictions for the test dataset. I'm using a scikit-learn Pipeline with an MLPRegressor. However, the predictions I get have the length of the train set, even though I'm using 'test.csv'.
What do I need to change so that the predictions have the same length as the test data?
train_pipeline.py
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
# project-specific helpers: config, split_join_string, pipeline, save_pipeline

# Read training data
data = pd.read_csv(data_path, sep=';', low_memory=False, parse_dates=parse_dates)

# Fill all missing target values with 0
data[config.TARGET] = data[config.TARGET].fillna(0)

# Clean up string-encoded target values
data[config.TARGET] = data[config.TARGET].apply(
    lambda x: split_join_string(x) if (isinstance(x, str) and len(x.split('.')) > 0) else x)

# Split into train and test sets
X_train, X_test, y_train, y_test = train_test_split(
    data[config.FEATURES],
    data[config.TARGET],
    test_size=0.1,
    random_state=0)  # set the seed for reproducibility

# Log-transform the target (0 is left as 0)
y_train = y_train.apply(lambda x: np.log(float(x)) if x != 0 else 0)
y_test = y_test.apply(lambda x: np.log(float(x)) if x != 0 else 0)

# Save the held-out test set to 'test.csv' without the index
data_test = pd.concat([X_test, y_test], axis=1)
data_test.to_csv(data_path_test, sep=';', index=False)

# Fit the pipeline on the training data only and persist it
pipeline.order_pipe.fit(X_train[config.FEATURES], y_train)
save_pipeline(pipeline_to_persist=pipeline.order_pipe)
predict.py
def make_prediction(*, input_data) -> dict:
    """Make a prediction using the saved model pipeline."""
    data = pd.DataFrame(input_data)
    validated_data = validate_inputs(input_data=data)

    # Predict on whatever rows are passed in; the output has one value per input row
    prediction = _order_pipe.predict(validated_data[config.FEATURES])

    # Reverse the log transform applied to the target during training
    output = np.exp(prediction)
    # score = _order_pipe.score(validated_data[config.FEATURES], validated_data[config.TARGET])

    results = {'predictions': output, 'version': _version}

    _logger.info(f'Making predictions with model version: {_version}'
                 f'\nInputs: {validated_data}'
                 f'\nPredictions: {results}')

    return results
I expect the predictions to have the size of 'test.csv', but the actual predictions have the size of 'train.csv'. Do I need to fit or transform the test dataset with 'order_pipe' to get predictions of the right size?
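For reference, this is roughly how I call make_prediction on the held-out set (a minimal sketch; I'm assuming here that 'test.csv' is the file written by train_pipeline.py and that make_prediction is imported from predict.py):

import pandas as pd
from predict import make_prediction

# Load the held-out rows saved by train_pipeline.py
test_data = pd.read_csv('test.csv', sep=';')

# Predict on the test rows only; the pipeline should return one prediction per input row
results = make_prediction(input_data=test_data)

# I expect these two lengths to match
print(len(results['predictions']), len(test_data))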