Handsfree input allows people to interact with the real world without occupying their hands and is especially important for augmented reality headsets. Currently, dwell time is used with eye gaze and head pointing as a handsfree selection technique. However, prior work on improving dwell time has not addressed unintended selections (i.e., the Midas Touch problem) for general, everyday use. This paper presents NodEverywhere, which uses deep learning to accurately detect a single head nod and a backtracking algorithm to determine where the user intended to click. We first conducted a 12-person target selection user study according to ISO 9241-9, comparing NodEverywhere with head-dwell and gaze-dwell. Results showed that NodEverywhere achieved 93.85% selection accuracy. We then evaluated NodEverywhere in three real application scenarios (Gmail, YouTube, Holograms). The results demonstrated that NodEverywhere triggered fewer unintentional selections than dwell-based input. Additionally, NodEverywhere is significantly faster than gaze-dwell overall and comparable in speed to head-dwell. User preferences indicated that NodEverywhere is the most suitable confirmation technique, especially in infrequent-selection scenarios such as YouTube. Based on these findings, we expect \projectName{} to be an efficient confirmation technique.