
Conversation

@mariam851
Contributor

Add 'tol' option for termination criterion in SequentialFeatureSelector

Hi Sebastian,

I added a new tol option to SequentialFeatureSelector so that users can stop feature selection early when the score improvement falls below the tolerance.

It works similarly to scikit-learn's tol parameter.
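To illustrate the idea, here is a simplified, self-contained sketch of the stopping criterion (the scores are fabricated, and the plain loop stands in for the actual candidate-evaluation loop in SequentialFeatureSelector):

```python
# Simplified sketch of tol-based early stopping (illustrative only;
# the scores are made up and this loop stands in for SFS's real
# candidate-evaluation loop).
avg_scores = [0.70, 0.78, 0.80, 0.802]  # hypothetical CV averages per step
tol = 0.005
best_avg_score = float("-inf")

for k_score in avg_scores:
    # Terminate once the improvement over the best score so far
    # drops below the tolerance.
    if tol is not None and k_score - best_avg_score < tol:
        break
    if k_score > best_avg_score:
        best_avg_score = k_score
```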

All tests run successfully locally. Happy to adjust anything if needed!

Thank you for the great library and examples!

Owner

@rasbt rasbt left a comment


Thanks for the PR. Overall, this looks good to me. However, I'd say these comments are unnecessary, as it's clear what's changed in the file diff. There is no need to explicitly comment on changes like that.

n_jobs=1,
pre_dispatch="2*n_jobs",
clone_estimator=True,
tol=None, # Added: Option for early termination criterion with tolerance
Owner


Suggested change
tol=None, # Added: Option for early termination criterion with tolerance
tol=None,

feature_groups=None,
):
self.estimator = estimator
self.tol=tol # Added: Store the tolerance value
Owner


Suggested change
self.tol=tol # Added: Store the tolerance value
self.tol = tol

"avg_score": np.nanmean(k_score),
}

# Added: Initialization and basic validation for the 'tol' criterion ---
Owner


Suggested change
# Added: Initialization and basic validation for the 'tol' criterion ---

"cv_scores": cv_scores,
"avg_score": k_score,
}
# Added: Update the overall best score achieved so far
Owner


Suggested change
# Added: Update the overall best score achieved so far

if k_score > best_avg_score:
    best_avg_score = k_score

# --- Added: Early stopping check based on 'tol' criterion ---
Owner


Suggested change
# --- Added: Early stopping check based on 'tol' criterion ---

@rasbt
Owner

rasbt commented Dec 14, 2025

Hi there, thanks for updating and removing some of the redundant comments. However, I see there are still quite a few left. It would be nice to clean them up before merging.

@mariam851 mariam851 reopened this Dec 20, 2025
@rasbt
Owner

rasbt commented Dec 20, 2025

This looks good now, thanks!

> All tests run successfully locally. Happy to adjust anything if needed!

Do you mean the existing tests, or do you have an additional test regarding the tol feature that you maybe forgot to add in this PR? If you don't have a new test for this, that's fine btw. The code looks pretty self-explanatory and I can't see any issues with it.

@mariam851
Contributor Author

"Hi Sebastian,

Thanks for the feedback! To clarify, I was referring to the existing test suite, which passes locally on my machine. I haven't added a new dedicated test file for the tol feature yet, as I verified it manually, but I'd be more than happy to add an automated test case for it if you'd prefer!
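For reference, a dedicated test could look roughly like this (a sketch only; the dataset, estimator, and tolerance value are placeholders that would be adapted to the existing test suite):

```python
# Hypothetical test sketch for the tol option; the dataset, estimator,
# and tolerance value are placeholders.
from sklearn.datasets import load_iris
from sklearn.neighbors import KNeighborsClassifier
from mlxtend.feature_selection import SequentialFeatureSelector as SFS

def test_tol_stops_selection_early():
    X, y = load_iris(return_X_y=True)
    knn = KNeighborsClassifier(n_neighbors=3)

    # With a deliberately large tolerance, forward selection should
    # stop before reaching the requested number of features.
    sfs = SFS(knn, k_features=4, forward=True, cv=3, tol=0.5)
    sfs.fit(X, y)
    assert len(sfs.k_feature_idx_) < 4
```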

Also, I've just performed a clean update (force-push) to:

- Sync the branch with the latest master (to fix the out-of-date status).
- Apply all your suggested formatting fixes (the trailing comma in tol=None, and the spacing in self.tol = tol).
- Ensure the commit history is clean.

Regarding the failing Linux build, I'll keep an eye on it after this update to see whether it was just a transient CI issue. Thanks again!
