
Journal Timo de Boer

As a developer, I would like to know if all the buttons should be on the same side.

Good button placement is essential for a well-functioning remote. If, for example, you have to stop walking, search for the button and then press it every time you want to use the cane, the remote becomes extremely inefficient.

What is important for a functioning remote? The buttons must have a convenient size: if they are tiny they are harder to press, and if they are too big the remote becomes unnecessarily large. The spacing between buttons must also be chosen carefully. If the buttons are very close together, people are more likely to press the wrong one, and if they are spaced further apart they are harder to find and the remote becomes inconvenient to use. The majority of people (90%, according to WebMD) use their right hand when operating a cane, phone, or remote control. As the thumb is the most dexterous and versatile finger, it is the most commonly used finger for pressing buttons on these devices. It is therefore convenient to place buttons within easy reach of the right thumb, so that most users can operate the remote comfortably and efficiently.

"thumb reach"

Where can the right thumb reach? The picture above shows that users have a hard time reaching places above the position of the hand. When holding a cylinder, most people rest either their thumb or their index finger on top of it. When the buttons are not on the same side, for example like this:

"figma design buttons"

This makes it harder for the user to press the buttons on the back and the right side of the cane. When the buttons are placed on top like this:

"figma design buttons"

Or this:

"figma design buttons"

It is easier for the index finger or thumb resting on top to press the button.

As a developer, I would like to know if a cross button layout is beneficial.

"figma design buttons" This is the cross design I came up with for the five buttons we need. At least five buttons are needed because, in the main app from Step-Hear, there are 4 main buttons labelled 1, 2, 3 and 4. We are also thinking about implementing a voice assistant, so we are also going to need a button for that. The cross-button layout is used in a lot of famous consoles for example the the Xbox, PS4 and Wii. The cross layout is mostly used for adding the directions left, right, up and down. In our case, we could make the cross into buttons 1 to 4 and make a button in the centre for the voice assistant.

The cross layout has more spacing between the buttons than the square layout, which could help with error prevention.

As a developer, I would like to know if a square button layout is beneficial, so that we can have as ergonomic a design as possible.

"figma design buttons"

The square button layout is mostly used for calculators, keyboards and TV remotes. It is very reliable for the following reasons:

  1. Consistency: A square button layout provides a consistent look and feel across an interface, making it easier for users to understand and use. When buttons are the same size and shape, users can easily predict where to find a button and how to interact with it.

  2. Space efficiency: A square button layout allows for maximum use of space, making it an ideal choice for mobile and small screen devices where screen real estate is limited.

  3. Ease of use: With a square button layout, buttons can be placed close together, reducing the need for scrolling or zooming. This makes it easier for users to navigate an interface and find what they need quickly.

As a developer, I want to understand the code that the earlier team wrote for their mobile application by learning the basics of React Native.

What is React Native?

React Native is an open-source framework for building mobile applications using JavaScript and React, which is a popular JavaScript library for building user interfaces. It was created by Facebook and first released in 2015.

React Native allows developers to build native mobile apps for iOS, Android, and other platforms using a single codebase. This means that developers can use their existing knowledge of JavaScript and React to create high-performance mobile apps that look and feel like native apps, without having to learn separate programming languages and frameworks for each platform.

React Native uses native components, such as text, images, and buttons, to create a rich user experience. It also provides access to native APIs, such as camera, location, and contacts, so that developers can create powerful and fully-featured apps.

One of the key benefits of React Native is that it enables rapid development and iteration, as changes to the code can be instantly previewed in a simulator or on a physical device. Additionally, React Native apps can be easily integrated with other web technologies, such as GraphQL, Redux, and Apollo, making it a versatile platform for building mobile apps.

Basics of React Native

Components: React Native is based on the concept of building UIs using reusable components. Components are self-contained pieces of code that can be combined to create more complex UIs.

JSX: React Native uses JSX (JavaScript XML) syntax to define the structure of components. JSX allows developers to write HTML-like code within their JavaScript code, making it easier to build and visualize the UI.

Styles: React Native uses CSS-like styles to control the appearance of components. Styles can be defined inline or in a separate stylesheet, and can be easily modified or overridden.

State: State is used to store and manage data within a component. When the state changes, React Native will automatically re-render the component to reflect the new data.

Props: Props (short for “properties”) are used to pass data from one component to another. Props are read-only, and cannot be modified by the component that receives them.

Native components: React Native provides a set of built-in native components that can be used to create a UI that looks and feels like a native app. These components include things like text, images, buttons, and input fields.

Platform-specific code: React Native allows developers to write platform-specific code when necessary, using the “Platform” module. This allows developers to create platform-specific behaviors and UIs for their apps.
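For example, the Platform module can be used to branch styles per OS. A minimal sketch (the padding values here are illustrative, not taken from our app):

import { Platform, StyleSheet } from 'react-native';

const styles = StyleSheet.create({
  header: {
    // Platform.select picks the value that matches the current OS.
    ...Platform.select({
      ios: { paddingTop: 44 },     // leave room for the iOS status bar
      android: { paddingTop: 24 }, // typical Android status bar height
    }),
  },
});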

Code example

What does React Native look like in code?

import React, { Component } from 'react';
import { View, Text } from 'react-native';

class App extends Component {
  render() {
    return (
      <View>
        <Text>Hello, world!</Text>
      </View>
    );
  }
}
This code renders the text "Hello, world!" inside the App's View. The text has no styling yet. If you would like to add styling, then you need a stylesheet like the following:

const styles = StyleSheet.create({
  container: {
    flex: 1,
    justifyContent: 'center',
    alignItems: 'center',
    backgroundColor: '#fff',
  },
  text: {
    fontSize: 24,
    fontWeight: 'bold',
    color: '#007aff',
  },
});
and set the style prop on the Text component to apply the styling:
<Text style={styles.text}>Hello, world!</Text>
If you would like the text to function as a button, then you need to wrap it in a TouchableOpacity like this:
    <TouchableOpacity onPress={this.handleClick}>
        <Text style={this.state.textClicked ? styles.clickedText : styles.text}>
            {this.state.textClicked ? 'You clicked me!' : 'Click me!'}
        </Text>
    </TouchableOpacity>
In this case, pressing the button changes the text from 'Click me!' to 'You clicked me!' through the method handleClick, in which the boolean textClicked is flipped to its opposite value. A minimal sketch of such a handler (assuming textClicked is kept in the component state, as in the snippet above) could be:
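  state = { textClicked: false };

  handleClick = () => {
    // Flip the boolean in state; React Native re-renders the Text with the other label.
    this.setState(prevState => ({ textClicked: !prevState.textClicked }));
  };

I will now list some of the other main components with a quick example of each:

1. TextInput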
      <TextInput
        style={styles.input}
        onChangeText={onChangeText}
        value={text}
      />
2. Text
    <Text style={styles.baseText}>
      I am bold
      <Text style={styles.innerText}> and red</Text>
    </Text>
3. Image
      <Image
        style={styles.tinyLogo}
        source={{
          uri: 'https://reactnative.dev/img/tiny_logo.png',
        }}
      />
4. Button
<Button
  onPress={onPressLearnMore}
  title="Learn More"
  color="#841584"
/>
5. ScrollView
    <SafeAreaView style={styles.container}>
      <ScrollView style={styles.scrollView}>
        <Text style={styles.textStyle}>Large Text</Text>
      </ScrollView>
    </SafeAreaView>
6. StyleSheet
const styles = StyleSheet.create({
  container: {
    flex: 1,
    padding: 24,
    backgroundColor: '#eaeaea',
  },
  title: {
    borderWidth: 4,
    borderColor: '#20232a',
    backgroundColor: '#61dafb',
    color: '#20232a',
    textAlign: 'center',
    fontSize: 30,
    fontWeight: 'bold',
  },
});
7. Toast
  const showToast = () => {
    ToastAndroid.show('A wild toast appeared nearby !', ToastAndroid.SHORT);
  };
8. Link button
const OpenURLButton = ({url, children}) => {
  const handlePress = useCallback(async () => {
    // Check whether the link is supported, for links with a custom URL scheme.
    const supported = await Linking.canOpenURL(url);

    if (supported) {
      // Open the link with some app; if the URL scheme is "http" the web link
      // is opened by a browser on the phone.
      await Linking.openURL(url);
    } else {
      Alert.alert(`Don't know how to open this URL: ${url}`);
    }
  }, [url]);

  return <Button title={children} onPress={handlePress} />;
};

const supportedURL = 'https://google.com';

<OpenURLButton url={supportedURL}>Go to Google!</OpenURLButton>

How to open Google Assistant from React Native?

First I researched whether it was possible to open the Google Assistant by opening a URL. In the React Native documentation I came across Linking, which can be used to check whether a URL can be opened and, if so, open it. It turns out that Google Assistant cannot be opened from a URL, so this idea had to be abandoned. An example of what I had in mind:

    const url = "googleassistant://";
    // Checking if the link is supported for links with a custom URL scheme.
    const supported = await Linking.canOpenURL(url);

    if (supported) {
      // Opening the link with some app, if the URL scheme is "HTTP" the web link should be opened
      // by some browser on the mobile
      await Linking.openURL(url);
    } else {
      Alert.alert(`Don't know how to open this URL: ${url}`);
    }

The Voice library could also be used to build our own voice assistant. For example, the voice recording could be started with:

    Voice.start("en-US")
This method starts listening for speech in a specific locale, en-US. The method Voice.stop() can be used to make the app stop listening for speech. Event handlers like the following can then be assigned to utilise the recorded speech:
    Voice.onSpeechResults = (event) => {...}
and
    Voice.onSpeechEnd = (event) => {...}
These handlers receive the recognised speech as strings, so it may be possible to send that text to a virtual assistant and then return a response to the user. The disadvantage of using the Voice library is that we would then have to build our own voice assistant from scratch. That would likely be out of scope, since we only want to open the phone's Google Assistant, not build our own.
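A rough sketch of how this could look with the react-native-voice library (the package name and handler shapes are taken from its documentation; the logging is only illustrative):

import Voice from '@react-native-voice/voice';

// Register the handlers before starting; the library calls them as recognition progresses.
Voice.onSpeechResults = (event) => {
  // event.value is an array of candidate transcriptions, best guess first.
  console.log('Heard:', event.value[0]);
};
Voice.onSpeechEnd = () => {
  console.log('Stopped listening');
};

const startListening = async () => {
  // Start listening for en-US speech; stop again with Voice.stop().
  await Voice.start('en-US');
};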

Next, I looked into using Android Intents to open the Google Assistant, with actions like:

    android.intent.action.VOICE_COMMAND
and
    android.intent.action.ACTION_VOICE_COMMAND
and
    android.intent.action.VOICE_ASSIST
These actions should start the voice assistant, so maybe one of them could be used in our application. After many tries and variants, we came to the conclusion that opening the voice assistant from our app is likely not possible. I also asked for advice from Joey, who has used Google Assistant in a project before. He told me that the old way of invoking Google Assistant was deprecated and sent me some other links to look at. The links he recommended were sadly also a bust, so we decided to stop looking for a solution to this problem and focus our attention elsewhere. The same problems occurred when researching Apple's assistant, Siri.
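For reference, one way such an intent can be fired from JavaScript is React Native's Android-only Linking.sendIntent. A sketch of the kind of call we tried (which, as described above, did not open the assistant for us):

// Linking and Alert are imported from 'react-native'; this call is Android only.
try {
  await Linking.sendIntent('android.intent.action.VOICE_COMMAND');
} catch (e) {
  Alert.alert(`Could not start the voice assistant: ${e.message}`);
}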

ESP32 as iBeacon

The following Arduino sketch turns the ESP32 into an iBeacon: on every boot it advertises briefly and then enters deep sleep for 10 seconds, encoding the boot count in the beacon's major and minor fields.

#include "sys/time.h"

#include "BLEDevice.h"

#include "BLEUtils.h"

#include "BLEServer.h"

#include "BLEBeacon.h"

#include "esp_sleep.h"



#define GPIO_DEEP_SLEEP_DURATION     10  // sleep x seconds and then wake up

RTC_DATA_ATTR static time_t last;        // remember last boot in RTC Memory

RTC_DATA_ATTR static uint32_t bootcount; // remember number of boots in RTC Memory

BLEAdvertising *pAdvertising;   // BLE Advertisement type

struct timeval now;


#define BEACON_UUID "87c9098e-f3e3-11ed-a05b-0242ac120003" // UUID 1 128-Bit (may use linux tool uuidgen or random numbers via https://www.uuidgenerator.net/)


void setBeacon() {


  BLEBeacon oBeacon = BLEBeacon();

  oBeacon.setManufacturerId(0x4C00); // fake Apple 0x004C LSB (ENDIAN_CHANGE_U16!)

  oBeacon.setProximityUUID(BLEUUID(BEACON_UUID));

  oBeacon.setMajor((bootcount & 0xFFFF0000) >> 16);

  oBeacon.setMinor(bootcount & 0xFFFF);

  BLEAdvertisementData oAdvertisementData = BLEAdvertisementData();

  BLEAdvertisementData oScanResponseData = BLEAdvertisementData();


  oAdvertisementData.setFlags(0x04); // BR_EDR_NOT_SUPPORTED 0x04


  std::string strServiceData = "";


  strServiceData += (char)26;     // Len

  strServiceData += (char)0xFF;   // Type

  strServiceData += oBeacon.getData();

  oAdvertisementData.addData(strServiceData);


  pAdvertising->setAdvertisementData(oAdvertisementData);

  pAdvertising->setScanResponseData(oScanResponseData);

}


void setup() {


  Serial.begin(115200);

  gettimeofday(&now, NULL);

  Serial.printf("start ESP32 %d\n", bootcount++);

  Serial.printf("deep sleep (%lds since last reset, %lds since last boot)\n", now.tv_sec, now.tv_sec - last);

  last = now.tv_sec;


  // Create the BLE Device

  BLEDevice::init("ESP32 as iBeacon");

  // Create the BLE Server

  BLEServer *pServer = BLEDevice::createServer(); // <-- no longer required to instantiate BLEServer, less flash and ram usage

  pAdvertising = BLEDevice::getAdvertising();

  BLEDevice::startAdvertising();

  setBeacon();

  // Start advertising

  pAdvertising->start();

  Serial.println("Advertizing started...");

  delay(100);

  pAdvertising->stop();

  Serial.printf("enter deep sleep\n");

  esp_deep_sleep(1000000LL * GPIO_DEEP_SLEEP_DURATION);

  Serial.printf("in deep sleep\n");

}


void loop() {

}

Sources

https://www.webmd.com/brain/ss/slideshow-left-handed-vs-right
https://reactnative.dev/docs/linking?syntax=android
https://github.com/react-native-voice/voice


Last update: June 1, 2023