TinyML algorithms, designed to operate on constrained devices such as those found in Internet of Things (IoT) systems, are vulnerable to adversarial threats, including fault injection attacks. These attacks use physical means to induce errors in computation, compromising the reliability of both the device and the TinyML models running on it. This work studies the behavior of TinyML models under fault injection attacks. Through systematic experimentation with voltage glitching and electromagnetic (EM) fault injection attacks on microcontrollers, it identifies attack configurations that induce faults without triggering a system reset, making practical attacks feasible. The study analyzes four types of TinyML models and demonstrates that all four generate inference outputs with reduced accuracy under both types of physical fault injection. Further, in some instances, attackers may be able to use a fault to steer inference toward a predictable malicious output rather than merely random incorrect results. These findings highlight the need for more robust fault injection protection mechanisms in TinyML implementations. As one such protection, this work demonstrates the use of random self-reductions and majority voting over intermediate values to protect TinyML models.
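The defense named at the end of the abstract can be illustrated with a minimal sketch. The code below is an illustrative toy, not the paper's implementation: it assumes a hypothetical linear layer `f(x) = w·x + b`, uses the linearity identity `f(x) = f(x + r) − f(r) + b` as the random self-reduction (so a transient fault that perturbs one randomized evaluation is unlikely to perturb the others identically), and takes a majority vote over the resulting intermediate values. All function names and parameters here are assumptions for illustration.

```python
import random

def linear(w, b, x):
    """Toy linear layer f(x) = w.x + b (stand-in for a TinyML layer)."""
    return sum(wi * xi for wi, xi in zip(w, x)) + b

def self_reduced_eval(w, b, x, trials=3):
    """Evaluate f(x) via random self-reduction plus majority voting.

    Linearity gives f(x) = f(x + r) - f(r) + b, so each trial evaluates
    f at two randomly shifted points. A transient injected fault that
    corrupts one trial is outvoted by the fault-free trials.
    (Illustrative sketch under assumed linearity, not the paper's code.)
    """
    votes = []
    for _ in range(trials):
        r = [random.uniform(-1.0, 1.0) for _ in x]
        shifted = [xi + ri for xi, ri in zip(x, r)]
        # f(x + r) - f(r) cancels the random shift but also cancels b,
        # so add b back once; round to absorb floating-point noise.
        y = linear(w, b, shifted) - linear(w, b, r) + b
        votes.append(round(y, 6))
    # Majority vote over the redundant intermediate values.
    return max(set(votes), key=votes.count)
```

For a nonlinear TinyML model, a scheme like this would be applied per linear layer (convolutions and fully connected layers are affine), with the vote taken on each layer's intermediate output before the activation.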