// For example, this expression...
if (name != nil && [name isEqualToString:@"Steve"]) { ... }
// ...can be simplified to:
if ([name isEqualToString:@"Steve"]) { ... }
Wow, that is some bad, bad code. Consider this snippet instead:

if ([name isDifferentFromString:@"bob"]) { ... }

I'd argue this is the way most methods are naturally written, so it doesn't have too much of an impact.
I'm struggling to think of an example where logically returning true for a nil receiver is much more natural than the other way around. Most of them, including your example of isEqualToString vs. isDifferentFromString, are at best equal.
It's probably something that should be made more explicit in discussions about nil swallowing, such as the OP.
I always check for nil explicitly, personally. It sidesteps any question of whether the method makes sense in the nil case. And sending messages to nil can't work in general, because the return value could be a struct.
if (![missileLauncher isDisabled]) { /* declare thermonuclear war */ }

When missileLauncher is nil, thermonuclear war is still declared, which may not be the programmer's intent.
I'm not saying the example is correct, I'm pointing out that "I've never heard of it" / "Works on my computer" are dangerous attitudes to have.
Of course, there is one place where you have to go the other way: -isNil does not work; you need to use -isNotNil.
Unless you use the runtime's (private) nil-hook:
+(void)setNilHandler
{
    installed = YES;
    _objc_setNilReceiver([self nsNil]);
}

Then you can send messages to nil and they will be sent to your class. Of course, you want to mimic the nil-eating behavior as much as possible, so:

static id idresult(id receiver, SEL selector, ...) { return nil; }
+(BOOL)resolveInstanceMethod:(SEL)selector
{
    class_addMethod(self, selector, (IMP)idresult, "@@:@");
    return YES;
}

NSString *val;
if (something) {
    val = @"hello";
} else if (somethingElse) {
    val = @"goodbye";
}
if (val) {
    [self display:val];
}
This can send you on a wild goose chase for not declaring val = nil initially.

"Using ARC, strong, weak, and autoreleasing stack variables are now implicitly initialized with nil."
This doesn't affect plain C variables, but it would solve the issue you had above.
Member variables of a class are set to zero by +alloc, which zero-fills the newly allocated object's memory.
package main

import "fmt"

func main() {
    var z []int = make([]int, 0, 0) // jerf changed this line
    fmt.Println(z, len(z), cap(z))
    if z == nil {
        fmt.Println("nil!")
    } else {
        fmt.Println("initialized slice is not nil")
    }
    // This won't even compile; "cannot convert nil
    // to type int" at the if clause:
    // var a int = 0
    // if a == nil {
    //     fmt.Println("jerf is wrong; nil does == 0")
    // }
}
(Oh, and for those who don't know, on the other end of that tour link is the live Go tutorial, which allows you to run arbitrary Go code in your browser. You can fiddle with this live, with no installation of Go.)

"C represents nothing as 0 for primitive value"
That's not true. There is no "nothing" for integer types, and floats either also lack "nothing", or have NaN as their "nothing", depending on your perspective.
One of the (many) annoying features of C-style NULL is that it means that some types are effectively option types while other types aren't, and there's no decent way to make them so. An NSString* is inherently "pointer to NSString or nothing", but an int is always an int, and I have to either carve out a special value to mean "nothing" (e.g. -1 used as the error return from a lot of UNIX syscalls) or use a separate flag.
But yeah, point well-taken. Perhaps I'll take the opportunity to wax on the other special values used in Foundation & CoreFoundation sometime.
However, 'int *' is inherently "pointer to int or nothing" (in the same style as your NSString example), so I'm not sure I agree with your assertion that some types are option types while some aren't in C. Any type can be an option type (in your sense of the term, again a la your NSString example) by making it into a pointer.
Similarly, using an NSString without a pointer (which might not be possible in ObjC, but definitely possible for regular C/C++ objects) makes it not optional, like the 'int'.
(Using the prevailing terminology. I'm not sure I'm comfortable with this use of the term "option type", but I'm rolling with it for now.)
if (!this) return false; // Or whatever

Of course it's open for debate as to whether this is something to encourage, but it can save you from having to use a Null Object pattern in some situations.

I think you mean non-virtual method, not 'class method' or static method, which wouldn't have a this pointer.

Naming things is always difficult, but I meant class method, although what I should have said was non-virtual class method, which is possible in C++. Both virtual and non-virtual methods have "this" pointers; the difference is whether the method to call is looked up at runtime or at compile time, so I would still consider both "methods". I didn't learn OOP from Smalltalk, so I may be abusing the terminology (it's not intentional).
#include <iostream>

class X
{
    int a;
public:
    X() : a(123) { }
    int get_a() { return this ? a : 1; }
};

class Y
{
    int b;
public:
    Y() : b(456) { }
    int get_b() { return this ? b : 2; }
};

class Z : public X, public Y
{
    int c;
public:
    Z() : c(789) { }
    int get_c() { return this ? c : 3; }
};

int main()
{
    // Calling a member function through a null pointer is undefined
    // behavior; this merely happens to "work" on many compilers.
    Z *z = NULL;
    std::cout << z->get_a() << '\n';
    std::cout << z->get_b() << '\n';
    std::cout << z->get_c() << '\n';
}

In the C Standard, an implementation can define NULL as either 0 or (void *)0. For an example of how that can go terribly wrong, consider the following:
printf("%p\n", NULL);

on an architecture where sizeof(int) != sizeof(void *).

NULL should be defined as ((void *)0) for C always (and you should define your own NULL to enforce that if necessary for varargs functions). This is safe because C allows implicit conversion from void * to other pointer types.
C++ does not allow conversion from void * to other pointer types, so what you were supposed to do was use plain 0, which introduced exactly the issue you mention (and which has bitten me before trying to use a C library from C++, where NULL was defined as plain 0).
With C++11 you should use nullptr, which does the right thing. Some implementations define NULL in C++ to a vendor-provided symbol which emulates nullptr so it might be safe to use NULL. If you have C++98 only and don't want to risk NULL then you should use static_cast<T*>(0) (where T is the actual pointer type), which is admittedly clunky.
So the bug you mention which bit you by using "C library from C++" was not caused by a difference between C and C++. It was caused by a difference in the definition of NULL between those two compilers. You could just as easily have found a C++ compiler which did work, or a C compiler which didn't.
Class _myClass;

// aClassName was previously defined
if ((_myClass = NSClassFromString(aClassName)) == Nil)
    [NSException raise:@"InvalidClass"
                format:@"Unknown class named %@.", aClassName];

id myObj = [[_myClass alloc] init];
[myObj whatever];

Back when the book "Effective Java" came out, a truly great piece of advice was to use empty arrays everywhere you could instead of null.
The second enlightenment came to me when the smart brains at JetBrains decided to ship IntelliJ IDEA with the @NotNull annotation. I started annotating as many value-returning methods as I could with @NotNull and then saw the number of NullPointerExceptions drop drastically.
It's not possible everywhere but once you start to think about it, you realize that you don't need nearly as many null references as you think you did.
Then you had IntelliJ IDEA warning you in real time (even on incomplete source files) about "reference xxx is never going to be null" (so the non-null check is pointless) or "warning: reference to xxx may be null". Great stuff. Years later, 99% of Java codebases still don't use the @NotNull annotation. Sad but true.
Now of course other languages' takes on the subject are interesting too: the Maybe monad, the way Clojure deals with empty/nil sequences, etc.
Null as a valid value for every reference type by default is brain-dead.
It is one of my major gripes with modern C#.